WO2023111384A1 - A method, an apparatus and a computer program product for video encoding and video decoding - Google Patents
- Publication number
- WO2023111384A1 (PCT/FI2022/050732)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- bitstream
- base
- filter
- input
- layer
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
Definitions
- JU Joint Undertaking
- the present solution generally relates to video coding for machines.
- Video Coding for Machines VCM
- an apparatus for decoding comprising means for receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and means for modifying the received input bitstream for improving task performance of one or more machine tasks.
- an apparatus for encoding comprising means for receiving an input video sequence; means for encoding the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; means for generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and means for including the moderator control information in or along the bitstream.
- a method for decoding comprising receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modifying the received input bitstream for improving task performance of one or more machine tasks.
- a method for encoding comprising receiving an input video sequence; encoding the input video sequence by a baselayer encoder to a bitstream comprising an encoded video stream; generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and including the moderator control information in or along the bitstream.
- an apparatus for decoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modify the received input bitstream for improving task performance of one or more machine tasks.
- an apparatus for encoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive an input video sequence; encode the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; generate moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and include the moderator control information in or along the bitstream.
- computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to: receive an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modify the received input bitstream for improving task performance of one or more machine tasks.
- computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to: receive an input video sequence; encode the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; generate moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and include the moderator control information in or along the bitstream.
- the modified bitstream is decoded by a base-layer decoder, and one or more machine tasks are applied to the decoded pictures.
- the caused modifications of the encoded video stream comprise one or more of the following: switching an in-loop filter on or off; adjusting parameters of an in-loop filter.
- an in-loop filter is switched on or off and a parameter of the in-loop filter is adjusted.
- adjusting parameters of an in-loop filter comprises one or both of the following: adjusting the parameters of the in-loop filter to cause sharpening of blocks classified to comprise edges; adjusting the parameters of the in-loop filter to cause blurring or smoothening of blocks that are classified to comprise texture (a simple illustration of such a rule is sketched below).
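- For illustration only, the following Python sketch shows a hypothetical moderator rule of the kind described above: depending on whether a block is classified as comprising edges or texture, the in-loop filter parameters would be adjusted towards sharpening or smoothening. All names and values are placeholders, not part of the described solution.
```python
from dataclasses import dataclass

@dataclass
class FilterAdjustment:
    filter_enabled: bool   # switch the in-loop filter on or off
    strength_delta: int    # negative -> sharpen, positive -> blur/smoothen

def moderate_block(block_class: str) -> FilterAdjustment:
    """Hypothetical moderator rule: map a block classification to a filter adjustment."""
    if block_class == "edge":
        # Sharpen blocks classified to comprise edges.
        return FilterAdjustment(filter_enabled=True, strength_delta=-2)
    if block_class == "texture":
        # Blur or smoothen blocks classified to comprise texture.
        return FilterAdjustment(filter_enabled=True, strength_delta=+2)
    # Leave other blocks unmodified.
    return FilterAdjustment(filter_enabled=True, strength_delta=0)
```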
- post-filter information is generated for adjusting filtering of decoded pictures for improving task performance of one or more machine tasks.
- moderator control information is received, and the received bitstream is modified based on the moderator control information.
- the decoded pictures are filtered prior to applying one or more machine tasks.
- post-filter control information is received, and the decoded pictures are filtered based on the post-filter control information.
- information indicative of a subset of pictures subject to be modified for improving task performance of one or more machine tasks is generated.
- information indicative of a subset of pictures subject to be modified for improving task performance of one or more machine tasks is received and interpreted.
- the input bitstream comprises a base-layer bitstream and the base-layer bitstream comprises supplemental information to modify the base-layer bitstream or to filter decoded pictures.
- the computer program product is embodied on a non-transitory computer readable medium.
- Fig. 1 shows an example of a codec with neural network (NN) components
- Fig. 2 shows another example of a video coding system with neural network components
- Fig. 3 shows an example of a neural auto-encoder architecture
- Fig. 4 shows an example of a neural network-based end-to-end learned video coding system
- Fig. 5 shows an example of a video coding for machines
- Fig. 6 shows an example of a pipeline for end-to-end learned system
- Fig. 7 shows an example of training an end-to-end learned system
- Fig. 8 shows an example of an end-to-end learned image or video codec
- Fig. 9 shows an example of a moderator component at the VCM decoder side
- Fig. 10 shows an example of a moderator component at the VCM encoder side
- Fig. 11 shows an example of a moderator component with multiple bitstream outputs at the VCM decoder side
- Fig. 12 shows an example of a moderator component for feature residual enhanced coding
- Fig. 13a is a flowchart illustrating a method for decoding according to an embodiment
- Fig. 13b is a flowchart illustrating a method for encoding according to an embodiment
- Fig. 14 illustrates an apparatus according to an embodiment.
- the present embodiments are targeted to a video coding system for machines.
- a neural network is a computation graph consisting of several layers of computation. Each layer consists of one or more units, where each unit performs an elementary computation. A unit is connected to one or more other units, and the connection may have a weight associated with it. The weight may be used for scaling the signal passing through the associated connection. Weights are learnable parameters, i.e., values which can be learned from training data. There may be other learnable parameters, such as those of batch-normalization layers.
- Feed-forward neural networks are such that there is no feedback loop: each layer takes input from one or more of the layers before and provides its output as the input for one or more of the subsequent layers. Also, units inside a certain layer take input from units in one or more of preceding layers and provide output to one or more of following layers.
- Initial layers extract semantically low-level features such as edges and textures in images, and intermediate and final layers extract more high-level features.
- semantically low-level features such as edges and textures in images
- intermediate and final layers extract more high-level features.
- After the feature extraction layers there may be one or more layers performing a certain task, such as classification, semantic segmentation, object detection, denoising, style transfer, super-resolution, etc.
- recurrent neural nets there is a feedback loop, so that the network becomes stateful, i.e., it is able to memorize information or a state.
- Neural networks are being utilized in an ever-increasing number of applications for many different types of devices, such as mobile phones. Examples include image and video analysis and processing, social media data analysis, device usage data analysis, etc.
- neural networks are able to learn properties from input data, either in supervised way or in unsupervised way. Such learning is a result of a training algorithm, or of a meta-level neural network providing the training signal.
- the training algorithm consists of changing some properties of the neural network so that its output is as close as possible to a desired output.
- the output of the neural network can be used to derive a class or category index which indicates the class or category that the object in the input image belongs to.
- Training usually happens by minimizing or decreasing the output’s error, also referred to as the loss. Examples of losses are mean squared error, cross-entropy, etc.
- training is an iterative process, where at each iteration the algorithm modifies the weights of the neural net to make a gradual improvement of the network’s output, i.e., to gradually decrease the loss.
- Training a neural network is an optimization process.
- the goal of the optimization or training process is to make the model learn the properties of the data distribution from a limited training dataset.
- the goal is to learn to use a limited training dataset in order to learn to generalize to previously unseen data, i.e., data which was not used for training the model. This is usually referred to as generalization.
- data may be split into at least two sets, the training set and the validation set.
- the training set is used for training the network, i.e., to modify its learnable parameters in order to minimize the loss.
- the validation set is used for checking the performance of the network on data, which was not used to minimize the loss, as an indication of the final performance of the model.
- the errors on the training set and on the validation set are monitored during the training process to understand the following things:
- the validation set error needs to decrease and to be not too much higher than the training set error. If the training set error is low, but the validation set error is much higher than the training set error, or it does not decrease, or it even increases, the model is in the regime of overfitting. This means that the model has just memorized the training set’s properties and performs well only on that set but performs poorly on a set not used for tuning its parameters.
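- As a minimal illustration of the training procedure and the train/validation split described above (not specific to any codec), the following NumPy sketch iteratively updates learnable parameters with gradient descent while monitoring both the training loss and the validation loss:
```python
import numpy as np

rng = np.random.default_rng(0)
x_train, x_val = rng.normal(size=(800, 4)), rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y_train, y_val = x_train @ true_w, x_val @ true_w

w = np.zeros(4)          # learnable parameters ("weights")
lr = 0.05                # learning rate
for step in range(200):
    grad = 2 * x_train.T @ (x_train @ w - y_train) / len(y_train)  # gradient of the MSE loss
    w -= lr * grad                                                  # gradient-descent update
    train_loss = np.mean((x_train @ w - y_train) ** 2)
    val_loss = np.mean((x_val @ w - y_val) ** 2)
    # If val_loss stops decreasing while train_loss keeps falling,
    # the model is entering the overfitting regime described above.
```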
- neural networks have been used for compressing and de-compressing data such as images, i.e., in an image codec.
- the most widely used architecture for realizing one component of an image codec is the auto-encoder, which is a neural network consisting of two parts: a neural encoder and a neural decoder.
- the neural encoder takes as input an image and produces a code which requires less bits than the input image. This code may be obtained by applying a binarization or quantization process to the output of the encoder.
- the neural decoder takes in this code and reconstructs the image which was input to the neural encoder.
- Such neural encoder and neural decoder may be trained to minimize a combination of bitrate and distortion, where the distortion may be based on one or more of the following metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), or similar.
- MSE Mean Squared Error
- PSNR Peak Signal-to-Noise Ratio
- SSIM Structural Similarity Index Measure
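- For reference, a small sketch of two of the distortion metrics named above, MSE and the PSNR derived from it, for 8-bit sample data (SSIM is omitted for brevity):
```python
import numpy as np

def mse(original: np.ndarray, decoded: np.ndarray) -> float:
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, decoded: np.ndarray, max_value: float = 255.0) -> float:
    m = mse(original, decoded)
    return float("inf") if m == 0 else 10.0 * np.log10(max_value ** 2 / m)
```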
- Video codec comprises an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form.
- An encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
- the H.264/AVC standard was developed by the Joint Video Team (JVT) of the Video Coding Experts Group (VCEG) of the Telecommunications Standardization Sector of International Telecommunication Union (ITU-T) and the Moving Picture Experts Group (MPEG) of International Organisation for Standardization (ISO) / International Electrotechnical Commission (IEC).
- JVT Joint Video Team
- VCEG Video Coding Experts Group
- MPEG Moving Picture Experts Group
- ISO International Organization for Standardization
- IEC International Electrotechnical Commission
- the H.264/AVC standard is published by both parent standardization organizations, and it is referred to as ITU- T Recommendation H.264 and ISO/IEC International Standard 14496-10, also known as MPEG-4 Part 10 Advanced Video Coding (AVC).
- Extensions of the H.264/AVC include Scalable Video Coding (SVC) and Multiview Video Coding (MVC).
- H.265/HEVC a.k.a. HEVC High Efficiency Video Coding
- JCT-VC Joint Collaborative Team on Video Coding
- the standard was published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2, also known as MPEG-H Part 2 High Efficiency Video Coding (HEVC).
- HEVC MPEG-H Part 2 High Efficiency Video Coding
- H.266 a.k.a. VVC Versatile Video Coding
- ISO/IEC 23090-3 ISO/IEC 23090-3
- An elementary unit for the input to a video encoder and the output of a video decoder, respectively, in most cases is a picture.
- a picture given as an input to an encoder may also be referred to as a source picture, and a picture decoded by a decoder may be referred to as a decoded picture or a reconstructed picture.
- the source and decoded pictures are each comprised of one or more sample arrays, such as one of the following sets of sample arrays:
- Green, Blue and Red (GBR, also known as RGB)
- a component may be defined as an array or single sample from one of the three sample arrays (luma and two chroma) that compose a picture, or the array or a single sample of the array that compose a picture in monochrome format.
- Hybrid video codecs may encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly the prediction error, i.e., the difference between the predicted block of pixels and the original block of pixels, is coded.
- motion compensation means finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded
- spatial means using the pixel values around the block to be coded in a specified manner.
- encoder can control the balance between the accuracy of the pixel representation (picture quality) and size of the resulting coded video representation (file size or transmission bitrate).
- a specified transform e.g., Discrete Cosine Transform (DCT) or a variant of it
- DCT Discrete Cosine Transform
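- As a simplified illustration of the residual coding phase described above (not an implementation of any particular standard), the following sketch transforms a residual block with a DCT, quantizes the coefficients with a chosen step size, and reconstructs the block; a coarser quantization step trades quality for bitrate:
```python
import numpy as np
from scipy.fft import dctn, idctn

def code_residual(residual: np.ndarray, qstep: float) -> np.ndarray:
    coeffs = dctn(residual, norm="ortho")        # forward transform (DCT)
    levels = np.round(coeffs / qstep)            # quantization: the lossy step
    return idctn(levels * qstep, norm="ortho")   # dequantization + inverse transform

block = np.random.default_rng(1).normal(size=(8, 8))
for qstep in (0.5, 2.0, 8.0):                    # larger step: fewer bits, more distortion
    reconstructed = code_residual(block, qstep)
    print(qstep, float(np.mean((block - reconstructed) ** 2)))
```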
- Inter prediction which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, exploits temporal redundancy.
- inter prediction the sources of prediction are previously decoded pictures.
- Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
- One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy- coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
- the decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame.
- the decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
- the motion information may be indicated with motion vectors associated with each motion compensated image block.
- Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) and the prediction source block in one of the previously coded or decoded pictures.
- those may be coded differentially with respect to block specific predicted motion vectors.
- the predicted motion vectors may be created in a predefined way, for example calculating the median of the encoded or decoded motion vectors of the adjacent blocks.
- Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor.
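- The following small sketch illustrates the median-based motion vector prediction mentioned above: the predictor is the component-wise median of the motion vectors of adjacent blocks, and only the difference to the predictor would be coded. The values are illustrative only:
```python
import numpy as np

def mv_predictor(adjacent_mvs: np.ndarray) -> np.ndarray:
    """Component-wise median of the motion vectors of adjacent blocks (shape N x 2)."""
    return np.median(adjacent_mvs, axis=0)

adjacent = np.array([[4, -2], [6, -2], [5, 0]])   # e.g., MVs of left / top / top-right blocks
current_mv = np.array([5, -1])
mvd = current_mv - mv_predictor(adjacent)         # only this difference would be coded
```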
- the reference index of previously coded/decoded picture can be predicted.
- the reference index is typically predicted from adjacent blocks and/or co-located blocks in temporal reference picture.
- high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction.
- predicting the motion field information may be carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures and the used motion field information is signaled among a list of motion field candidate list filled with motion field information of available adjacent/co-located blocks.
- the prediction residual after motion compensation may be first transformed with a transform kernel (like DCT) and then coded.
- a transform kernel like DCT
- Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g., the desired Macroblock mode and associated motion vectors.
- This kind of cost function uses a weighting factor to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
- C = D + λR
- where C is the Lagrangian cost to be minimized, λ is the Lagrange multiplier, D is the image distortion (e.g., Mean Squared Error) with the mode and motion vectors considered, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
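- As a minimal illustration of such a Lagrangian mode decision, the following sketch evaluates C = D + λR for a few candidate modes and selects the cheapest one; the distortion values, bit counts and λ value are placeholders:
```python
candidates = [
    {"mode": "intra",       "distortion": 120.0, "bits": 48},
    {"mode": "inter_merge", "distortion": 150.0, "bits": 12},
    {"mode": "inter_mvd",   "distortion": 110.0, "bits": 64},
]
lam = 0.8   # Lagrange multiplier (lambda); placeholder value
best = min(candidates, key=lambda c: c["distortion"] + lam * c["bits"])  # minimize C = D + lambda*R
```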
- a partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.
- a bitstream may be defined as a sequence of bits, which may in some coding formats or standards be in the form of a network abstraction layer (NAL) unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences.
- NAL network abstraction layer
- a bitstream format may comprise a sequence of syntax structures.
- a syntax element may be defined as an element of data represented in the bitstream.
- a syntax structure may be defined as zero or more syntax elements present together in the bitstream in a specified order.
- a NAL unit may be defined as a syntax structure containing an indication of the type of data to follow and bytes containing that data in the form of an RBSP interspersed as necessary with start code emulation prevention bytes.
- a raw byte sequence payload (RBSP) may be defined as a syntax structure containing an integer number of bytes that is encapsulated in a NAL unit.
- An RBSP is either empty or has the form of a string of data bits containing syntax elements followed by an RBSP stop bit and followed by zero or more subsequent bits equal to 0.
- a parameter may be defined as a syntax element of a parameter set.
- a parameter set may be defined as a syntax structure that contains parameters and that can be referred to from or activated by another syntax structure for example using an identifier.
- a coding standard or specification may specify several types of parameter sets. It needs to be understood that embodiments may be applied but are not limited to the described types of parameter sets and embodiments could likewise be applied to any parameter set type.
- a parameter set may be activated when it is referenced e.g., through its identifier.
- An adaptation parameter set (APS) may be defined as a syntax structure that applies to zero or more slices. There may be different types of adaptation parameter sets.
- An adaptation parameter set may for example contain filtering parameters for a particular type of a filter. In VVC, three types of APSs are specified carrying parameters for one of: adaptive loop filter (ALF), luma mapping with chroma scaling (LMCS), and scaling lists.
- a scaling list may be defined as a list that associates each frequency index with a scale factor for the scaling process, which multiplies transform coefficient levels by a scaling factor, resulting in transform coefficients.
- an APS is referenced through its type (e.g., ALF, LMCS, or scaling list) and an identifier.
- different types of APSs have their own identifier value ranges.
- An Adaptation Parameter Set (APS) may comprise parameters for decoding processes of different types, such as adaptive loop filtering or luma mapping with chroma scaling.
- Video coding specifications may enable the use of supplemental enhancement information (SEI) messages or alike.
- SEI supplemental enhancement information
- Some video coding specifications include SEI network abstraction layer (NAL) units, and some video coding specifications contain both prefix SEI NAL units and suffix SEI NAL units, where the former type can start a picture unit or alike and the latter type can end a picture unit or alike.
- An SEI NAL unit contains one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, post-processing of decoded pictures, rendering, error detection, error concealment, and resource reservation.
- SEI messages are specified in H.264/AVC, H.265/HEVC, H.266/VVC, and H.274/VSEI standards, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use.
- the standards may contain the syntax and semantics for the specified SEI messages but a process for handling the messages in the recipient might not be defined. Consequently, encoders may be required to follow the standard specifying a SEI message when they create SEI message(s), and decoders might not be required to process SEI messages for output order conformance.
- One of the reasons to include the syntax and semantics of SEI messages in standards is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.
- SEI messages are generally not extended in future amendments or versions of the standard.
- the phrase along the bitstream (e.g., indicating along the bitstream) or along a coded unit of a bitstream (e.g., indicating along a coded tile) may be used in claims and described embodiments to refer to transmission, signaling, or storage in a manner that the "out-of-band" data is associated with but not included within the bitstream or the coded unit, respectively.
- the phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out- of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively.
- the phrase along the bitstream may be used when the bitstream is contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
- a container file such as a file conforming to the ISO Base Media File Format
- certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
- Image and video codecs may use a set of filters to enhance the visual quality of the predicted visual content and can be applied either in-loop or out-of-loop, or both.
- With in-loop filters, the filter applied on one block in the currently-encoded frame will affect the encoding of another block in the same frame and/or in another frame which is predicted from the current frame.
- An in-loop filter can affect the bitrate and/or the visual quality. In fact, an enhanced block will cause a smaller residual (difference between original block and predicted-and-filtered block), thus requiring less bits to be encoded.
- An out-of-loop filter is applied on a frame after it has been reconstructed; the filtered visual content will not be used as a source for prediction, and thus it may only impact the visual quality of the frames that are output by the decoder.
- In-loop filters in a video/image encoder and decoder may comprise, but may not be limited to, one or more of the following:
- DBF - deblocking filter
- ALF adaptive loop filter
- CC-ALF cross-component adaptive loop filter
- a deblocking filter may be configured to reduce blocking artefacts due to block-based coding.
- a deblocking filter may be applied (only) to samples located at prediction unit (PU) and/or transform unit (TU) boundaries, except at the picture boundaries or when disabled at slice and/or tile boundaries.
- Horizontal filtering may be applied (first) for vertical boundaries, and vertical filtering may be applied for horizontal boundaries.
- a sample adaptive offset (SAO) may be another in-loop filtering process that modifies decoded samples by conditionally adding an offset value to a sample (possibly to each sample), based on values in look-up tables transmitted by the encoder.
- SAO may have one or more (e.g., two) operation modes: band offset and edge offset modes.
- band offset mode an offset may be added to the sample value depending on the sample amplitude.
- the full sample amplitude range may be divided into a number of bands (e.g., 32 bands), and sample values belonging to four of these bands may be modified by adding a positive or negative offset, which may be signalled for each coding tree unit (CTU).
- CTU coding tree unit
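- A simplified sketch of the SAO band offset mode described above, assuming 8-bit samples (the 256-value amplitude range is split into 32 bands and an offset is added to samples in four signalled bands); band indices and offsets are illustrative:
```python
import numpy as np

def sao_band_offset(samples: np.ndarray, start_band: int, offsets: list[int]) -> np.ndarray:
    band = samples.astype(np.int32) >> 3          # 256 sample values / 32 bands = 8 values per band
    out = samples.astype(np.int32).copy()
    for i, off in enumerate(offsets):             # offsets for four consecutive bands
        out[band == start_band + i] += off
    return np.clip(out, 0, 255).astype(np.uint8)

ctu = np.random.default_rng(3).integers(0, 256, size=(16, 16), dtype=np.uint8)
filtered = sao_band_offset(ctu, start_band=10, offsets=[1, 2, -1, -2])
```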
- the horizontal, vertical, and two diagonal gradients may be used for classification.
- An Adaptive Loop Filter may apply block-based filter adaptation. For example, for the luma component, one among 25 filters may be selected for each 4x4 block, based on the direction and activity of local gradients, which are derived using the samples values of that 4x4 block.
- the ALF classification may be performed on 2x2 block units, for instance. When all of the vertical, horizontal and diagonal gradients are below a first threshold value, the block may be classified as texture (not containing edges). Otherwise, the block may be classified to contain edges, a dominant edge direction may be derived from horizontal, vertical and diagonal gradients, and a strength of the edge (e.g. strong or weak) may be further derived from the gradient values.
- the filtering may be performed by applying a 7x7 diamond filter, for example, to the luma component.
- An ALF filter set may comprise one filter for each chroma component, and a 5x5 diamond filter may be applied to the chroma components, for example.
- the filter coefficients use point-symmetry relative to the center point.
- An ALF design may comprise clipping the difference between the neighboring sample value and the current to-be-filtered sample before the difference is added, which provides adaptability related to both spatial relationship and value similarity between samples.
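- The following illustrative sketch mimics the gradient-based block classification described above: a small block is classified as texture when all gradient activities stay below a threshold, and as containing edges otherwise. The gradient measures and the threshold are simplified placeholders, not the ones used in any standard:
```python
import numpy as np

def classify_block(block: np.ndarray, threshold: float = 50.0) -> str:
    b = block.astype(np.int32)
    gh = np.abs(np.diff(b, axis=1)).sum()                   # horizontal gradient activity
    gv = np.abs(np.diff(b, axis=0)).sum()                   # vertical gradient activity
    gd1 = np.abs(np.diff(np.diagonal(b))).sum()             # first diagonal
    gd2 = np.abs(np.diff(np.diagonal(np.fliplr(b)))).sum()  # second diagonal
    if max(gh, gv, gd1, gd2) < threshold:
        return "texture"   # no gradient exceeds the threshold: no edges
    return "edge"          # a dominant edge direction could then be derived from gh, gv, gd1, gd2
```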
- cross-component ALF uses luma sample values to refine each chroma component by applying an adaptive linear filter to the luma channel and then using the output of this filtering operation for chroma refinement.
- Filtering in CC-ALF is accomplished by applying a linear, diamond shaped filter to the luma channel.
- ALF filter parameters are signalled in Adaptation Parameter Set (APS). For example, in one APS, up to 25 sets of luma filter coefficients and clipping value indices, and up to eight sets of chroma filter coefficients and clipping value indices could be signalled. To reduce the overhead, filter coefficients of different classification for luma component can be merged.
- slice header the identifiers of the APSs used for the current slice are signaled.
- ALF APS indices can be signaled to specify the luma filter sets that are used for the current slice.
- the filtering process can be further controlled at coding tree block (CTB) level.
- CTB coding tree block
- a flag is signalled to indicate whether ALF is applied to a luma CTB.
- a filter set among 16 fixed filter sets and the filter sets from APSs selected in the slice header may be selected per each luma CTB by the encoder and may be decoded per each luma CTB by the decoder.
- a filter set index is signaled for a luma CTB to indicate which filter set is applied.
- the 16 fixed filter sets are pre-defined in the VVC standard and hardcoded in both the encoder and the decoder.
- the 16 fixed filter sets may be referred to as the pre-defined ALFs.
- LMCS luma mapping with chroma scaling
- the luma sample values of an input video signal to the encoder and output video signal from the decoder are represented in the original (unmapped) sample domain.
- Forward luma mapping maps luma sample values from the original sample domain to the mapped sample domain.
- Inverse luma mapping maps luma sample values from the mapped sample domain to the original sample domain.
- the processes in the mapped sample domain include inverse quantization, inverse transform, luma intra prediction and summing the luma prediction with the luma residue values.
- the processes in the original sample domain include in-loop filters (e.g., deblocking, SAO, ALF), inter prediction, and storage of pictures in the decoded picture buffer (DPB).
- one or more of the following steps may be performed:
- Reconstructed luma sample values in the mapped sample domain, Y'r, are obtained by summing the reconstructed luma residual values, Y'res, with the corresponding predicted luma values in the mapped sample domain, Y'pred. For intra prediction, Y'pred is obtained directly by performing intra prediction in the mapped sample domain.
- For inter prediction, the predicted luma values in the original sample domain, Ypred, are first obtained by motion compensation using reference pictures from the DPB, and then forward luma mapping is applied to produce the predicted luma values in the mapped sample domain, Y'pred.
- Inverse luma mapping is applied to the reconstructed values Y'r to produce reconstructed luma sample values in the original sample domain, which are processed by in-loop filters (deblocking, sample adaptive offset, and adaptive loop filter) before being stored in the DPB.
- in-loop filters deblocking, sample adaptive offset, and adaptive loop filter
- LMCS syntax elements are signalled in an adaptation parameter set (APS) with aps_params_type equal to 1 (LMCS_APS).
- APS adaptation parameter set
- aps_params_type equal to 1
- aps_adaptation_parameter_set_id The value range for an adaptation parameter set identifier (aps_adaptation_parameter_set_id) is from 0 to 3, inclusive, for LMCS APSs.
- the use of LMCS can be enabled or disabled in a picture header.
- the LMCS APS identifier value used for the picture (ph_lmcs_aps_id) is included in the picture header.
- the same LMCS parameters are used for entire picture.
- the chroma scaling part can be enabled or disabled in the picture header through ph_chroma_residual_scale_flag.
- LMCS is further enabled or disabled in the slice header for each slice.
- LMCS data within an LMCS APS comprises syntax related to a piecewise linear model of up to 16 pieces for luma mapping.
- the luma sample value range of the piecewise linear forward mapping function is uniformly sampled into 16 pieces of the same length OrgCW.
- OrgCW is, for example, 64 input codewords.
- binCW[i], the number of mapped codewords for the i-th piece, is determined in the encoding process.
- the difference between binCW[i] and OrgCW is signalled in the LMCS APS.
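- As a simplified illustration of the piecewise linear luma mapping described above, the following sketch builds a forward mapping look-up table from OrgCW and binCW[i]; when binCW[i] equals OrgCW for every piece, the mapping is the identity. Values are illustrative only:
```python
def build_forward_lut(org_cw: int, bin_cw: list[int]) -> list[int]:
    """Piecewise linear forward mapping: piece i of length org_cw maps onto bin_cw[i] codewords."""
    lut, mapped_start = [], 0
    for i in range(len(bin_cw)):                  # 16 pieces
        for j in range(org_cw):
            lut.append(mapped_start + (bin_cw[i] * j) // org_cw)   # linear segment i
        mapped_start += bin_cw[i]
    return lut

org_cw = 64                                   # e.g., 1024 codewords of a 10-bit signal / 16 pieces
bin_cw = [org_cw] * 16                        # binCW[i] == OrgCW for all pieces -> identity mapping
fwd_lut = build_forward_lut(org_cw, bin_cw)   # fwd_lut[y] is the mapped value of luma sample y
```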
- NNs neural networks
- traditional codec such as VVC/H.266
- traditional refers to those codecs whose components and their parameters may not be learned from data. Examples of such components are:
- Additional in-loop filter for example by having the NN as an additional in-loop filter with respect to the traditional loop filters.
- Figure 1 illustrates examples of functioning of NNs as components of a traditional codec's pipeline, in accordance with an embodiment.
- Figure 1 illustrates an encoder, which also includes a decoding loop.
- Figure 1 is shown to include components described below:
- a luma intra pred block or circuit 101 This block or circuit performs intra prediction in the luma domain, for example, by using already reconstructed data from the same frame.
- the operation of the luma intra pred block or circuit 101 may be performed by a deep neural network such as a convolutional auto-encoder.
- a chroma intra pred block or circuit 102 This block or circuit performs intra prediction in the chroma domain, for example, by using already reconstructed data from the same frame.
- the chroma intra pred block or circuit 102 may perform crosscomponent prediction, for example, predicting chroma from luma.
- the operation of the chroma intra pred block or circuit 102 may be performed by a deep neural network such as a convolutional auto-encoder.
- An intra pred block or circuit 103 and inter-pred block or circuit 104 These blocks or circuit perform intra prediction and inter-prediction, respectively.
- the intra pred block or circuit 103 and the inter-pred block or circuit 104 may perform the prediction on all components, for example, luma and chroma.
- the operations of the intra pred block or circuit 103 and inter-pred block or circuit 104 may be performed by two or more deep neural networks such as convolutional auto-encoders.
- a probability estimation block or circuit 105 for entropy coding This block or circuit performs prediction of probability for the next symbol to encode or decode, which is then provided to the entropy coding module 112, such as the arithmetic coding module, to encode or decode the next symbol.
- the operation of the probability estimation block or circuit 105 may be performed by a neural network.
- transform and quantization block or circuit 106 may perform a transform of input data to a different domain, for example, the FFT transform would transform the data to frequency domain.
- the transform and quantization block or circuit 106 may quantize its input values to a smaller set of possible values.
- there may be inverse quantization block or circuit and inverse transform block or circuit 113.
- One or both of the transform block or circuit and quantization block or circuit may be replaced by one or two or more neural networks.
- One or both of the inverse transform block or circuit and inverse quantization block or circuit 113 may be replaced by one or two or more neural networks.
- An in-loop filter block or circuit 107. Operations of the in-loop filter block or circuit 107 are performed in the decoding loop, and it performs filtering on the output of the inverse transform block or circuit, or anyway on the reconstructed data, in order to enhance the reconstructed data with respect to one or more predetermined quality metrics. This filter may affect both the quality of the decoded data and the bitrate of the bitstream output by the encoder.
- the operation of the in-loop filter may be performed by multiple steps or filters, where the one or more steps may be performed by neural networks.
- A postprocessing filter block or circuit 108.
- the postprocessing filter block or circuit 108 may be performed only at decoder side, as it may not affect the encoding process.
- the postprocessing filter block or circuit 108 filters the reconstructed data output by the in-loop filter block or circuit 107, in order to enhance the reconstructed data.
- the postprocessing filter block or circuit 108 may be replaced by a neural network, such as a convolutional auto-encoder.
- a resolution adaptation block or circuit 109 this block or circuit may downsample the input video frames, prior to encoding. Then, in the decoding loop, the reconstructed data may be upsampled, by the upsampling block or circuit 110, to the original resolution.
- the operation of the resolution adaptation block or circuit 109 block or circuit may be performed by a neural network such as a convolutional autoencoder.
- An encoder control block or circuit 111 This block or circuit performs optimization of encoder's parameters, such as what transform to use, what quantization parameters (QP) to use, what intra-prediction mode (out of N intra-prediction modes) to use, and the like.
- the operation of the encoder control block or circuit 111 may be performed by a neural network, such as a classifier convolutional network, or such as a regression convolutional network.
- An ME/MC block or circuit 114 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing interframe prediction.
- ME/MC stands for motion estimation / motion compensation.
- NNs are used as the main components of the image/video codecs.
- end-to-end learned compression there are two main options:
- Option 1 re-use the video coding pipeline but replace most or all the components with NNs.
- FIG 2 it illustrates an example of modified video coding pipeline based on a neural network, in accordance with an embodiment.
- An example of neural network may include, but is not limited to, a compressed representation of a neural network.
- Figure 2 is shown to include following components:
- a neural transform block or circuit 202 this block or circuit transforms the output of a summation/subtraction operation 203 to a new representation of that data, which may have lower entropy and thus be more compressible.
- a quantization block or circuit 204 this block or circuit quantizes an input data 201 to a smaller set of possible values.
- An inverse transform and inverse quantization blocks or circuits 206 These blocks or circuits perform the inverse or approximately inverse operation of the transform and the quantization, respectively.
- An encoder parameter control block or circuit 208 This block or circuit may control and optimize some or all the parameters of the encoding process, such as parameters of one or more of the encoding blocks or circuits.
- An entropy coding block or circuit 210 This block or circuit may perform lossless coding, for example based on entropy.
- One popular entropy coding technique is arithmetic coding.
- a neural intra-codec block or circuit 212 This block or circuit may be an image compression and decompression block or circuit, which may be used to encode and decode an intra frame.
- An encoder 214 may be an encoder block or circuit, such as the neural encoder part of an auto-encoder neural network.
- a decoder 216 may be a decoder block or circuit, such as the neural decoder part of an auto-encoder neural network.
- An intra-coding block or circuit 218 may be a block or circuit performing some intermediate steps between encoder and decoder, such as quantization, entropy encoding, entropy decoding, and/or inverse quantization.
- a deep loop filter block or circuit 220 This block or circuit performs filtering of reconstructed data, in order to enhance it.
- a decode picture buffer block or circuit 222 is a memory buffer, keeping the decoded frame, for example, reconstructed frames 224 and enhanced reference frames 226 to be used for inter prediction.
- An inter-prediction block or circuit 228 This block or circuit performs interframe prediction, for example, predicts from frames, for example, frames 232, which are temporally nearby.
- An ME/MC 230 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction.
- ME/MC stands for motion estimation / motion compensation.
- Option 2 re-design the whole pipeline, as follows.
- - Encoder NN is configured to perform a non-linear transform
- - Decoder NN is configured to perform a non-linear inverse transform.
- FIG. 3 shows an encoder NN and a decoder NN being parts of a neural auto-encoder architecture, in accordance with an example.
- the Analysis Network 301 is an Encoder NN
- the Synthesis Network 302 is the Decoder NN, which may together be referred to as spatial correlation tools 303, or as neural auto-encoder.
- the input data 304 is analyzed by the Encoder NN (Analysis Network 301 ), which outputs a new representation of that input data.
- the new representation may be more compressible.
- This new representation may then be quantized, by a quantizer 305, to a discrete number of values.
- the quantized data is then lossless encoded, for example by an arithmetic encoder 306, thus obtaining a bitstream 307.
- the example shown in Figure 3 includes an arithmetic decoder 308 and an arithmetic encoder 306.
- the arithmetic encoder 306, or the arithmetic decoder 308, or the combination of the arithmetic encoder 306 and arithmetic decoder 308 may be referred to as arithmetic codec in some embodiments.
- the bitstream is first lossless decoded, for example, by using the arithmetic codec decoder 308.
- the lossless decoded data is dequantized and then input to the Decoder NN, Synthesis Network 302.
- the output is the reconstructed or decoded data 309.
- the lossy steps may comprise the Encoder NN and/or the quantization.
- a training objective function (also called “training loss”) may be utilized, which may comprise one or more terms, or loss terms, or simply losses.
- the training loss comprises a reconstruction loss term and a rate loss term.
- the reconstruction loss encourages the system to decode data that is similar to the input data, according to some similarity metric. Examples of reconstruction losses are:
- MS-SSIM Multi-scale structural similarity
- error(f1 , f2) where f1 and f2 are the features extracted by a pretrained neural network for the input data and the decoded data, respectively, and error() is an error or distance function, such as L1 norm or L2 norm;
- GANs Generative Adversarial Networks
- the rate loss encourages the system to compress the output of the encoding stage, such as the output of the arithmetic encoder.
- by compressing, we mean reducing the number of bits output by the encoding stage.
- the rate loss typically encourages the output of the Encoder NN to have low entropy.
- examples of rate losses are the following:
- a sparsification loss i.e., a loss that encourages the output of the Encoder NN or the output of the quantization to have many zeros. Examples are L0 norm, L1 norm, L1 norm divided by L2 norm;
- One or more of reconstruction losses may be used, and one or more of the rate losses may be used, as a weighted sum.
- the different loss terms may be weighted using different weights, and these weights determine how the final system performs in terms of rate-distortion loss. For example, if more weight is given to the reconstruction losses with respect to the rate losses, the system may learn to compress less but to reconstruct with higher accuracy (as measured by a metric that correlates with the reconstruction losses).
- These weights may be considered to be hyper-parameters of the training session and may be set manually by the person designing the training session, or automatically for example by grid search or by using additional neural networks.
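- A minimal sketch of how such a weighted training objective could look in Python/PyTorch is given below; the choice of MSE as the reconstruction loss, of an L1/L2 sparsification term as the rate loss, and the particular weight values are illustrative assumptions only.

```python
import torch

def reconstruction_loss(x, x_hat):
    # One possible reconstruction term: mean squared error between input and decoded data.
    return torch.mean((x - x_hat) ** 2)

def sparsification_rate_loss(latent, eps=1e-8):
    # L1 norm divided by L2 norm: encourages many zeros in the Encoder NN / quantizer output.
    return latent.abs().sum() / (latent.pow(2).sum().sqrt() + eps)

def training_loss(x, x_hat, latent, w_rec=1.0, w_rate=0.01):
    # Weighted sum of the loss terms; the weights are hyper-parameters of the training session.
    return w_rec * reconstruction_loss(x, x_hat) + w_rate * sparsification_rate_loss(latent)
```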
- a neural network-based end-to-end learned video coding system may contain an encoder 401, a quantizer 402, a probability model 403, an entropy codec 420 (for example arithmetic encoder 405 / arithmetic decoder 406), a dequantizer 407, and a decoder 408.
- the encoder 401 and decoder 408 may be two neural networks, or mainly comprise neural network components.
- the probability model 403 may also comprise mainly neural network components.
- Quantizer 402, dequantizer 407 and entropy codec 420 may not be based on neural network components, but they may also comprise neural network components, potentially.
- the encoder component 401 takes a video x 409 as input and converts the video from its original signal space into a latent representation that may comprise a more compressible representation of the input.
- the latent representation may be a 3-dimensional tensor, where two dimensions represent the vertical and horizontal spatial dimensions, and the third dimension represents the "channels" which contain information at that specific location.
- the latent representation is a tensor of dimensions (or “shape”) 64x64x32 (i.e., with horizontal size of 64 elements, vertical size of 64 elements, and 32 channels).
- in some implementations, the channel dimension may be the first dimension; for example, the shape of an input tensor of 128x128 pixels with 3 color channels may then be represented as 3x128x128, instead of 128x128x3.
- another dimension in the input tensor may be used to represent temporal information.
- the quantizer component 402 quantizes the latent representation into discrete values given a predefined set of quantization levels.
- Probability model 403 and arithmetic codec component 420 work together to perform lossless compression for the quantized latent representation and generate bitstreams to be sent to the decoder side.
- for each symbol to be encoded or decoded, the probability model 403 estimates the probability distribution of all possible values for that symbol based on a context that is constructed from available information at the current encoding/decoding state, such as the data that has already been encoded/decoded.
- the arithmetic encoder 405 encodes the input symbols to bitstream using the estimated probability distributions.
- the arithmetic decoder 406 and the probability model 403 first decode symbols from the bitstream to recover the quantized latent representation. Then the dequantizer 407 reconstructs the latent representation in continuous values and passes it to the decoder 408 to recover the input video/image. Note that the probability model 403 in this system is shared between the encoding and decoding systems. In practice, this means that a copy of the probability model 403 is used at the encoder side, and another exact copy is used at the decoder side.
- the encoder 401 , probability model 403, and decoder 408 may be based on deep neural networks.
- the system may be trained in an end-to-end manner by minimizing a rate-distortion loss function that combines a distortion loss term D and a rate loss term R, for example L = D + λR, where λ is a weighting factor.
- the distortion loss term may be the mean square error (MSE), structure similarity (SSIM) or other metrics that evaluate the quality of the reconstructed video. Multiple distortion losses may be used and integrated into D, such as a weighted sum of MSE and SSIM.
- the rate loss term is normally the estimated entropy of the quantized latent representation, which indicates the number of bits necessary to represent the encoded symbols, for example, bits-per-pixel (bpp).
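- For illustration, the rate-distortion objective described above might be computed as in the following sketch, where the likelihoods are assumed to come from the probability model 403, and the weighting factor and the use of MSE are assumptions rather than requirements.

```python
import torch
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, likelihoods, lam=0.01):
    # Distortion D: here mean square error; SSIM or a weighted sum of metrics could be used instead.
    distortion = F.mse_loss(x_hat, x)
    # Rate R: estimated entropy of the quantized latent, expressed as bits per pixel (bpp).
    num_pixels = x.shape[0] * x.shape[-2] * x.shape[-1]
    rate_bpp = -torch.log2(likelihoods).sum() / num_pixels
    # Combined loss L = D + lambda * R (the placement of the weighting factor is a convention choice).
    return distortion + lam * rate_bpp
```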
- the system may contain only the probability model 403 and arithmetic encoder/decoder 405, 406.
- the system loss function contains only the rate loss, since the distortion loss is always zero (i.e., no loss of information).
- Reducing the distortion in image and video compression is often intended to increase human perceptual quality, as humans are considered to be the end users, i.e., consuming/watching the decoded image.
- However, the decoded data may also be consumed by machines, i.e., autonomous agents, which analyze the data automatically.
- Examples of such analysis are object detection, scene classification, semantic segmentation, video event detection, anomaly detection, pedestrian tracking, etc.
- Example use cases and applications are self-driving cars, video surveillance cameras and public safety, smart sensor networks, smart TV and smart advertisement, person re-identification, smart traffic monitoring, drones, etc.
- VCM Video Coding for Machines
- VCM concerns the encoding of video streams to allow consumption by machines.
- The term "machine" refers to any device other than a human.
- Examples of machines are a mobile phone, an autonomous vehicle, a robot, and other such intelligent devices which may have a degree of autonomy or run an intelligent algorithm to process the decoded stream beyond reconstructing the original input stream.
- a machine may perform one or multiple tasks on the decoded stream. Examples of tasks can comprise the following:
- Classification: classify an image or video into one or more predefined categories.
- the output of a classification task may be a set of detected categories, also known as classes or labels.
- the output may also include the probability and confidence of each predefined category.
- - Object detection: detect one or more objects in a given image or video.
- the output of an object detection task may be the bounding boxes and the associated classes of the detected objects.
- the output may also include the probability and confidence of each detected object.
- - Instance segmentation: detect, and segment at pixel level, one or more objects in a given image or video.
- the output of an instance segmentation task may be binary mask images or other representations of the binary mask images, e.g., closed contours, of the detected objects.
- the output may also include the probability and confidence of each object for each pixel.
- - Semantic segmentation: assign the pixels in an image or video to one or more predefined semantic categories.
- the output of a semantic segmentation task may be binary mask images or other representations of the binary mask images, e.g., closed contours, of the assigned categories.
- the output may also include the probability and confidence of each semantic category for each pixel.
- - Object tracking: track one or more objects in a video sequence.
- the output of an object tracking task may include frame index, object ID, object bounding boxes, probability, and confidence for each tracked object.
- - Captioning: generate one or more short text descriptions for an input image or video.
- the output of the captioning task may be one or more short text sequences.
- - Human pose estimation: estimate the positions of key points, e.g., wrists, elbows, knees, etc., of one or more human bodies in an image or video.
- the output of a human pose estimation includes sets of locations of each key point of a human body detected in the input image or video.
- Human action recognition: recognize the actions, e.g., walking, talking, shaking hands, of one or more people in an input image or video.
- the output of the human action recognition may be a set of predefined actions, probability, and confidence of each identified action.
- - Anomaly detection: detect abnormal objects or events in an input image or video.
- the output of an anomaly detection may include the locations of detected abnormal objects or the segments of frames where abnormal events are detected in the input video.
- the receiver-side device has multiple “machines” or task neural networks (Task-NNs). These multiple machines may be used in a certain combination which is for example determined by an orchestrator sub-system. The multiple machines may be used for example in succession, based on the output of the previously used machine, and/or in parallel. For example, a video which was compressed and then decompressed may be analyzed by one machine (NN) for detecting pedestrians, by another machine (another NN) for detecting cars, and by another machine (another NN) for estimating the depth of all the pixels in the frames.
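- A conceptual sketch of such an orchestration is shown below; the detectors, depth estimator and tracker are hypothetical placeholders, and the tracker running after the detectors merely illustrates machines used in succession.

```python
# Hypothetical orchestrator running several task-NNs on decoded frames.
def run_machines(decoded_frames, pedestrian_detector, car_detector, depth_estimator, tracker):
    results = {"pedestrians": [], "cars": [], "depth": [], "tracks": []}
    for frame in decoded_frames:
        pedestrians = pedestrian_detector(frame)   # machine 1
        cars = car_detector(frame)                 # machine 2
        depth = depth_estimator(frame)             # machine 3, usable in parallel
        tracks = tracker(pedestrians + cars)       # machine 4, used in succession
        results["pedestrians"].append(pedestrians)
        results["cars"].append(cars)
        results["depth"].append(depth)
        results["tracks"].append(tracks)
    return results
```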
- the terms "machine" and "task neural network" are used interchangeably, and such a reference means any process or algorithm (learned or not from data) which analyzes or processes data for a certain task.
- the terms "receiver-side" or "decoder-side" are used to refer to the physical or abstract entity or device which contains one or more machines, and runs these one or more machines on an encoded and eventually decoded video representation which is encoded by another physical or abstract entity or device, the "encoder-side device".
- the encoded video data may be stored into a memory device, for example as a file. The stored file may later be provided to another device. Alternatively, the encoded video data may be streamed from one device to another.
- FIG. 5 is a general illustration of the pipeline of Video Coding for Machines.
- a VCM encoder 502 encodes the input video into a bitstream 504.
- a bitrate 506 may be computed 508 from the bitstream 504 in order to evaluate the size of the bitstream.
- a VCM decoder 510 decodes the bitstream output by the VCM encoder 502.
- the output of the VCM decoder 510 is referred to as “Decoded data for machines” 512. This data may be considered as the decoded or reconstructed video. However, in some implementations of this pipeline, this data may not have same or similar characteristics as the original video which was input to the VCM encoder 502.
- this data may not be easily understandable by a human by simply rendering the data onto a screen.
- the output of VCM decoder is then input to one or more task neural networks 514.
- among the task-NNs 514, there are three example task-NNs and a non-specified one (Task-NN X).
- the goal of VCM is to obtain a low bitrate while guaranteeing that the task-NNs still perform well in terms of the evaluation metric 516 associated with each task.
- FIG. 6 illustrates an example of a pipeline for the end- to-end learned approach.
- the video is input to a neural network encoder 601.
- the output of the neural network encoder 601 is input to a lossless encoder 602, such as an arithmetic encoder, which outputs a bitstream 604.
- the lossless codec may comprise a probability model 603, used both in the lossless encoder and in the lossless decoder, which predicts the probability of the next symbol to be encoded or decoded.
- the probability model 603 may also be learned, for example it may be a neural network.
- the bitstream 604 is input to a lossless decoder 605, such as an arithmetic decoder, whose output is input to a neural network decoder 606.
- the output of the neural network decoder 606 is the decoded data for machines 607, that may be input to one or more task-NNs 608.
- Figure 7 illustrates an example of how the end-to-end learned system may be trained. For the sake of simplicity, only one task-NN 707 is illustrated.
- a rate loss 705 may be computed from the output of the probability model 703. The rate loss 705 provides an approximation of the bitrate required to encode the input video data.
- a task loss 710 may be computed 709 from the output 708 of the task-NN 707.
- the rate loss 705 and the task loss 710 may then be used to train 711 the neural networks used in the system, such as the neural network encoder 701, the probability model 703, and the neural network decoder 706. Training may be performed by first computing gradients of each loss with respect to the neural networks that are contributing to or affecting the computation of that loss. The gradients are then used by an optimization method, such as Adam, for updating the trainable parameters of the neural networks.
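- A minimal training-step sketch for this setup is given below; encoder, probability_model, decoder and task_nn are assumed to be torch.nn.Module instances standing in for the blocks 701, 703, 706 and 707, and the additive-noise stand-in for quantization as well as the weighting factor are illustrative assumptions.

```python
import torch

def train_step(video_batch, labels, encoder, probability_model, decoder, task_nn,
               optimizer, task_loss_fn, lam=0.01):
    latent = encoder(video_batch)
    latent_q = latent + (torch.rand_like(latent) - 0.5)   # differentiable proxy for quantization
    likelihoods = probability_model(latent_q)             # per-symbol probabilities in (0, 1]
    rate_loss = -torch.log2(likelihoods).mean()           # approximates the bitrate (rate loss 705)
    decoded = decoder(latent_q)                           # decoded data for machines
    task_loss = task_loss_fn(task_nn(decoded), labels)    # task loss 710
    loss = lam * rate_loss + task_loss
    optimizer.zero_grad()
    loss.backward()                                       # gradients w.r.t. the contributing networks
    optimizer.step()                                      # parameter update, e.g. with torch.optim.Adam
    return loss.item()
```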
- the machine tasks may be performed at decoder side (instead of at encoder side) for multiple reasons, for example because the encoder-side device does not have the capabilities (computational, power, memory) for running the neural networks that perform these tasks, or because some aspects or the performance of the task neural networks may have changed or improved by the time that the decoder-side device needs the tasks results (e.g., different or additional semantic classes, better neural network architecture). Also, there could be a customization need, where different clients would run different neural networks for performing these machine learning tasks.
- a video codec for machines can be realized by using a traditional codec such as H.266/VVC.
- another possible design may comprise using a traditional "base" codec, such as H.266/VVC, which additionally comprises one or more neural networks.
- the one or more neural networks may replace or be an alternative of one of the components of the traditional codec, such as:
- the one or more neural networks may function as an additional component, such as:
- another possible design may comprise using any codec architecture (such as a traditional codec, or a traditional codec which includes one or more neural networks, or an end-to-end learned codec), and having a post-processing neural network which adapts the output of the decoder so that it can be analyzed more effectively by one or more machines or task neural networks.
- the encoder and decoder may be conformant to the H.266/VVC standard, a postprocessing neural network takes the output of the decoder, and the output of the post-processing neural network is then input to an object detection neural network.
- the object detection neural network is the machine or task neural network.
- Figure 8 illustrates an example including an encoder, a decoder, a post-processing filter, a set of task-NNs.
- the encoder and decoder may represent a traditional image or video codec, such as a codec conformant with the VVC/H.266 standard, or may represent an end-to-end (E2E) learned image or video codec.
- the post-processing filter may be a neural network-based filter.
- the task-NNs may be neural networks that perform tasks such as object detection, object segmentation, object tracking, etc.
- Video codecs have been developed for decades and provide sophisticated performance for human consumption. Systems that are conformant to existing standards have been broadly developed.
- a video codec for machine consumption may use an existing codec as a base layer and take advantage of the technologies. Furthermore, it may be easily adopted by the industry.
- the existing video codec for human consumption is not optimized for machine tasks. Changes to the existing codec may break the compatibility to the standards and increase the difficulty of adopting and deploying the video codec for machines.
- the present embodiments provide an improved VCM codec architecture using an existing video codec as a base-layer codec and a moderator component to adjust the bitstream of the base-layer codec to achieve better performance for the machine tasks.
- the VCM codec architecture improves the performance of the system while maintaining the compatibility of the bitstream to the existing standards, specifications and/or codec implementations.
- the performance of the system may be measured by the rate distortion loss.
- the rate distortion loss may be a weighted sum of the rate loss and the distortion loss, where the rate loss indicates the compression ratio, e.g., the size of the bitstream generated from the input data, and the distortion loss measures the performance of the one or more machine tasks.
- the performance of machine tasks e.g., the classification, object detection or object segmentation task may be measured by the mean average precision of the classification or detection results.
- the performance of one or more machine tasks may be estimated by the distance between the two sets of features extracted from the input data and the compressed data, where the distance may be mean square error (MSE) or mean absolute error (MAE) and the feature extractor may be a neural network that is pretrained for a generic task, such as classification, or a sub-network of a pretrained neural network.
- MSE mean square error
- MAE mean absolute error
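- The following sketch shows one possible way to compute such a feature-distance estimate; the choice of a ResNet-50 classification backbone from torchvision and of MSE as the distance are illustrative assumptions.

```python
import torch
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
# Drop the final classification layer to obtain a generic feature extractor.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def feature_distance(x, x_compressed):
    # Distance between features of the input data and the compressed/reconstructed data.
    with torch.no_grad():
        f1 = feature_extractor(x)
        f2 = feature_extractor(x_compressed)
    return torch.mean((f1 - f2) ** 2)   # MSE; mean absolute error could be used instead
```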
- a VCM codec uses an existing video codec as a base layer codec.
- the base-layer codec may be a codec that may have been developed and/or optimized for human consumption.
- Embodiments are not limited to any specific base-layer encoder, decoder, or bitstream format. Embodiments may be applied, for example, to AVC/H.264, HEVC/H.265, VVC/H.266, or AV1 (specified by the Alliance for Open Media) as a base-layer encoder, decoder, and/or bitstream format.
- the VCM codec may also comprise one or more pre-processing filters at the encoder side and one or more post-processing and/or in-loop filters at the decoder side.
- the bitstream being sent from the VCM encoder to the decoder may comprise two parts.
- the first part, named the base-layer bitstream, is the bitstream generated by the base-layer encoder
- the second part, named the VCM bitstream, contains extra information to enhance the base-layer bitstream for the machine tasks.
- since the base-layer codec may be optimized for human consumption, the coding settings and configurations may not be optimal for machine tasks.
- the one or more in-loop filters used in the base-layer decoder may generate reconstructed frames that degrade the performance of the machine tasks.
- a moderator component is introduced.
- the input to the moderator component is the base-layer bitstream generated by the base-layer encoder.
- the moderator component may parse the base-layer bitstream, modify the settings that may improve the performance for the machine tasks, and output a modified bitstream.
- the modified bitstream may be processed by the base-layer decoder to generate reconstructed input video.
- the modification to the bitstream by the moderator component does not impact the conformance to the existing standard or specification of the base-layer codec.
- the moderator modifies the base-layer bitstream so that the modified bitstream causes switching one or more in-loop filters on/off in the base-layer decoder.
- the moderator modifies the base-layer bitstream so that the modified bitstream causes switching one or more post-processing filters on/off in the base-layer decoder.
- the modifications may include one or more of the following:
- ALF adaptive loop filter
- in-loop filters, for example, cross-component adaptive loop filter, deblocking filter, sample adaptive offset (SAO) filter, bilateral filter, and a neural-network-based filter;
- SAO sample adaptive offset
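- Conceptually, such a modification could look like the sketch below; a real H.266/VVC bitstream would have to be parsed and re-serialized, so here the relevant parameters are assumed to be already available as a dictionary, and the flag names are illustrative, not the exact syntax element names of any standard.

```python
def moderate_filter_flags(parsed_params, enable_alf=None, enable_sao=None, enable_deblocking=None):
    # Switch individual in-loop filters on or off for the machine tasks.
    if enable_alf is not None:
        parsed_params["alf_enabled_flag"] = int(enable_alf)
    if enable_sao is not None:
        parsed_params["sao_enabled_flag"] = int(enable_sao)
    if enable_deblocking is not None:
        # Deblocking is often signalled via a "disabled" flag; the polarity here is assumed.
        parsed_params["deblocking_filter_disabled_flag"] = int(not enable_deblocking)
    return parsed_params
```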
- the moderator modifies the base-layer bitstream so that the modified bitstream causes a different setting of one or more in-loop or postprocessing filters in the base-layer decoder.
- the modifications may include one or more of the following:
- - adjusting the parameters of the ALF filter for the machine tasks
- - adjusting the parameters of one or more other in-loop filters, for example, cross-component adaptive loop filter, deblocking filter, SAO filter, bilateral filter, and a neural network-based filter;
- the modifications adjusting the parameters of the ALF filter for the machine tasks comprise one or both of the following:
- the moderator modifies the base-layer bitstream so that the modified bitstream causes a different setting for the base-layer decoder in addition to or instead of switching on or off or modifying settings of in-loop or post-processing filters.
- the modifications may comprise, but may not be limited to, one or more of the following:
- the modifications adjusting the parameters of luma mapping comprise the following:
- the moderator component receives moderator control information in or along the VCM bitstream, wherein the moderator control information controls how the base-layer bitstream is modified.
- the moderator control information may comprise, but may not be limited to, one or more of the following:
- in-loop filters and/or post-processing filters may comprise, but may not be limited to, one or more of the following: adaptive loop filters, cross-component adaptive loop filters, deblocking filter, SAO filter, bilateral filter, neural network-based in-loop filters.
- the base-layer bitstream may have syntax structures and/or syntax elements corresponding to the modifications and/or the moderator control information. These syntax structures and/or syntax elements may be referred to as moderated base-layer parameters.
- a moderator component resides in the VCM decoder side to manipulate the bitstream received from the encoder.
- the moderator component signals relevant modifications to one or more post-processing filters to generate outputs for machine and/or human tasks.
- Figure 9 shows a system architecture, where a VCM moderator is at the VCM decoder side.
- the system architecture presents modules and signals that are optional, including for example the VCM pre-processing filter and the VCM post-processing filter.
- Input data x is processed by the VCM pre-processing filter to generate an input to the base-layer encoder.
- the base-layer encoder encodes the input into a base-layer bitstream.
- the VCM moderator may parse the base-layer bitstream, modify the bitstream, and output the modified bitstream.
- the base-layer decoder decodes the modified base-layer bitstream and outputs data y, which may be regarded as the reconstructed data corresponding to the input data x.
- the one or more VCM post-processing filters may process y to generate output z for one or more machine tasks.
- the one or more VCM post-processing filters may also output reconstructed data for human consumption.
- the VCM pre-processing filter may signal moderator control information to the VCM moderator in or along the VCM bitstream.
- a VCM encoder may comprise control logic or alike that signals moderator control information in or along the VCM bitstream to the VCM moderator.
- the VCM pre-processing filter may also signal post-filter control information to assist the VCM post-processing filter to generate output for the machine tasks.
- a VCM encoder may comprise control logic or alike that signals post-filter control information in or along the VCM bitstream to the VCM post-processing filter to generate output for the machine tasks.
- the post-filter control information may comprise, but may not be limited to, one or more of the following:
- post-processing filters may comprise, but may not be limited to, one or more of the following: ALF applied as post-filter, cross-component ALF applied as post-filter, Wiener filters, cross-component Wiener filters, deblocking filter, SAO filter, bilateral filter, neural-network-based post-processing filters.
- the moderator at the decoder side may output more than one bitstream.
- One bitstream may be processed by a standard base-layer decoder to generate reconstructed frames for human consumption.
- Other bitstreams may be processed by standard base-layer decoders and the VCM post-processing filters to generate reconstructed frames for machine consumption.
- a moderator component may be in the VCM encoder side to manipulate the bitstream.
- the modification to the bitstream may be signaled from the encoder to the VCM decoder for the VCM post-processing filter to generate reconstructed data for the machine tasks.
- Figure 10 shows a system architecture where a VCM moderator is at the VCM encoder side. It needs to be understood that the system architecture presents modules and signals that are optional, including for example the VCM pre-processing filter and the VCM post-processing filter.
- Input data x is processed by the VCM pre-processing filter to generate an input to the base-layer encoder.
- the base-layer encoder encodes the input into a base-layer bitstream.
- the VCM moderator may parse the base-layer bitstream, modify the bitstream, and output the modified bitstream.
- the base-layer decoder decodes the modified base-layer bitstream and outputs data y, which may be regarded as the reconstructed data corresponding to the input data x.
- the one or more VCM post-processing filters may process y to generate output z for one or more machine tasks.
- the one or more VCM post-processing filters may also output reconstructed data for human consumption.
- one or more VCM pre-processing filters may signal moderator control information to the VCM moderator at the encoder side.
- a VCM encoder may comprise control logic or alike that signals moderator control information to the VCM moderator.
- the VCM moderator may send post-filter control information in or along a VCM bitstream to assist the VCM post-processing filter to generate the output for the machine tasks.
- the VCM pre-processing filter may also signal post-filter control information to assist the VCM post-processing filter to generate output for the machine tasks.
- a VCM encoder may comprise control logic or alike that signals post-filter control information in or along the VCM bitstream to the VCM post-processing filter to generate output for the machine tasks.
- Figure 11 shows a VCM architecture where more than one base-layer decoder is used at the VCM decoder side.
- the VCM moderator component may output more than one bitstream.
- One bitstream may be identical to the bitstream generated by the base-layer encoder. This bitstream may be processed by a base-layer decoder to generate reconstructed input data for human consumption. Other bitstreams may be modified by the VCM moderator for better performance for machine tasks.
- FIGS 9, 10, and 11 illustrate a VCM encoder that comprises a base-layer encoder. It needs to be understood that embodiments may be similarly realized when base-layer encoding happens before VCM-aware processing and a VCM encoder gets a base-layer bitstream as input. Additionally, a VCM encoder may get reconstructed base-layer pictures as input or may decode the base-layer bitstream to obtain reconstructed base-layer pictures. The VCM encoder encodes moderator control information and/or post-filter control information in or along the VCM bitstream. Additionally, the VCM encoder may include the base-layer bitstream in or along the VCM bitstream.
- the moderator may be a NN-based system that is trained for optimizing the modifications performed by the VCM moderator and/or the moderated base-layer parameters of the base-layer bitstream for one or more machine vision tasks.
- more than one VCM moderator may be used in the encoder and/or decoder side for optimizing the base-layer bitstream for more than one machine vision task.
- the VCM moderator may manipulate the parameters of the baselayer bitstream for all pictures. In another embodiment, the VCM moderator manipulates the parameters of the base-layer bitstream for a subset of pictures.
- the method of selecting the subset of pictures may comprise, but may not be limited to, one or more of the following:
- the subset of pictures may comprise pictures that are not used as reference pictures or are not stored in the reference picture buffer of the base-layer codec. Consequently, the difference between a reconstructed picture of the base-layer encoder compared to the respective reconstructed picture of the base-layer decoder will not propagate temporally.
- the subset of pictures may comprise pictures preceding a random access point (RAP) picture in decoding order. Consequently, the difference between reconstructed pictures of the base-layer encoder compared to the respective reconstructed pictures of the base-layer decoder is limited to the subset of pictures and will not propagate to the RAP picture.
- RAP random access point
- the subset of pictures may comprise pictures at selected highest temporal sublayers. Consequently, the difference between reconstructed pictures of the base-layer encoder compared to the respective reconstructed pictures of the base-layer decoder is limited to certain temporal sublayers only.
- the base-layer bitstream comprises several scalability layers (e.g. for spatial or quality scalability)
- the subset of pictures may comprise pictures at selected highest scalability layers. Consequently, the difference between reconstructed pictures of the base-layer encoder compared to the respective reconstructed pictures of the base-layer decoder is limited to certain scalability layers only.
- the subset of pictures may be selected based on the quality of the pictures. For example, if the picture quality is high, it may be determined to exclude a picture from the subset of pictures, and if the picture quality is moderate or low, it may be determined to include a picture in the subset of pictures.
- a picture quality may be estimated e.g. based on one or more quantization parameter values, which may indicate a quantization step size for prediction residual.
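- As a simple illustration, the subset of pictures could be chosen by thresholding the quantization parameter, as in the sketch below; the threshold value is an assumption for illustration only.

```python
def select_pictures_for_moderation(picture_qps, qp_threshold=32):
    # Pictures coded with a high QP (coarse quantization, lower quality) are included in the subset.
    return [idx for idx, qp in enumerate(picture_qps) if qp >= qp_threshold]

# Example: with QPs [22, 35, 28, 40], pictures 1 and 3 would be selected for moderation.
subset = select_pictures_for_moderation([22, 35, 28, 40])   # -> [1, 3]
```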
- the VCM moderator may modify the settings of the one or more in-loop filters only for the one or more non-reference frames.
- the VCM pre-processing filter or the control logic of the VCM encoder or alike includes information indicative of the subset of pictures to be modified by a VCM moderator in moderator control information.
- the VCM moderator receives, within moderator control information, information indicative of the subset of pictures to be modified.
- the VCM moderator interprets the information indicative of the subset of pictures to be modified and modifies the subset of pictures.
- the information indicative of the subset of pictures to be modified by a VCM moderator may indicate a second subset of pictures that are not to be modified by a VCM moderator, from which the remaining pictures may be concluded to form the subset of pictures.
- the VCM moderator may be applied to one or more components of image/video picture(s). For example, it may be applied to one or more of the components of the content in RGB, YUV or any other color spaces. For example, the VCM moderator may be applied to luma (Y) only, whereas chroma (U, V) can be left unmodified. In an embodiment, the VCM moderator may be applied to certain parts of the picture(s). For example, it may change the one or more of the moderated base-layer parameters in prediction unit (PU), coding unit (CU), coding tree unit (CTU), slice, tile, subpicture, or picture levels.
- PU prediction unit
- CU coding unit
- CTU coding tree unit
- the VCM encoder includes moderator control information and/or post-filter control information in a VCM bitstream together with a base-layer bitstream.
- the VCM decoder decodes moderator control information and/or post-filter control information from a VCM bitstream that also contains a base-layer bitstream.
- the VCM bitstream may be structured in a way that it encapsulates the base-layer bitstream and contains the moderator control information and/or the post-filter control information in syntax structures separate from the encapsulated base-layer bitstream.
- a VCM bitstream may comprise a sequence of VCM units.
- a VCM unit may comprise a VCM unit header and a VCM unit payload.
- the VCM unit header may comprise a type syntax element that indicates the type of data contained in the VCM unit payload, wherein the types may comprise, but may not be limited to, one or more of the following: a portion of the base-layer bitstream, moderator control information, post-filter control information.
- a portion of the base-layer bitstream may for example be a NAL unit or an RBSP according to a video coding standard, such as H.264, H.265, or H.266.
- VCM unit boundaries in the sequence of VCM units may be concluded for example in one of the following ways: a VCM unit length (e.g. in bytes) may be included in the VCM unit header; a VCM unit header may start with, or may be immediately preceded by, a start code that is guaranteed not to appear, or is unlikely to appear, in locations other than VCM unit boundaries in the VCM bitstream.
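- The following sketch illustrates one possible length-prefixed byte layout for such VCM units; the 1-byte type code and the 4-byte big-endian length field are assumptions made for illustration, since the exact syntax is not fixed here.

```python
import struct

VCM_UNIT_TYPES = {"base_layer": 0, "moderator_control": 1, "post_filter_control": 2}

def write_vcm_unit(unit_type, payload):
    # VCM unit header: type syntax element plus payload length, followed by the payload.
    header = struct.pack(">BI", VCM_UNIT_TYPES[unit_type], len(payload))
    return header + payload

def read_vcm_units(bitstream):
    # VCM unit boundaries are concluded from the length field carried in each unit header.
    units, pos = [], 0
    while pos < len(bitstream):
        unit_type, length = struct.unpack_from(">BI", bitstream, pos)
        pos += 5
        units.append((unit_type, bitstream[pos:pos + length]))
        pos += length
    return units
```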
- the VCM encoder includes moderator control information and/or post-filter control information in one or more SEI messages (which may be called a moderator control information SEI message and/or post-filter control information SEI message, respectively) or alike within a base-layer bitstream.
- the VCM decoder decodes moderator control information and/or post-filter control information from one or more SEI messages or alike contained in a base-layer bitstream.
- a uniform resource identifier may be defined as a string of characters used to identify a name of a resource. Such identification enables interaction with representations of the resource over a network, using specific protocols.
- a URI is defined through a scheme specifying a concrete syntax and an associated protocol for the URI.
- the uniform resource locator (URL) and the uniform resource name (URN) are forms of URI
- a URL may be defined as a URI that identifies a web resource and specifies the means of acting upon or obtaining the representation of the resource, specifying both its primary access mechanism and network location.
- a URN may be defined as a URI that identifies a resource by name in a particular namespace. A URN may be used for identifying a resource without implying its location or how to access it.
- a moderator control SEI message or alike may comprise, but may not be limited to, one or more of the following:
- One or more syntax elements indicating one or more machine tasks for which this moderator control SEI message provides suitable modifications of the base-layer bitstream may for example be an index to a pre-defined enumerated list of machine tasks or a URI identifying a machine task.
- Such a syntax element may for example be a URI identifying a machine task NN.
- the SEI message or alike may comprise syntax elements carrying parameters, such as filter coefficients, of a particular in-loop filter.
- the SEI message or alike may comprise one or more adaptation_parameter_set_rbsp( ) syntax structures, carrying ALF APSs that the VCM moderator may use to replace the respective ALF APS NAL units in the base-layer bitstream.
- a post-filter control SEI message or alike may comprise, but may not be limited to, one or more of the following:
- One or more syntax elements indicating one or more machine tasks for which this post-filter control SEI message provides or characterizes a suitable postprocessing filter may for example be an index to a pre-defined enumerated list of machine tasks or a URI identifying a machine task.
- One or more syntax elements identifying one or more machine task NNs or algorithms for which this post-filter control SEI message provides or characterizes a suitable post-filter may for example be a URI identifying a machine task NN.
- the SEI message or alike may comprise syntax elements carrying parameters, such as filter coefficients, of a particular post-processing filter.
- the SEI message or alike may identify or carry a neural network post-processing filter.
- the modifications to the base-layer bitstream, for example the parameters for one or more in-loop filters and/or post-processing filters, or a part of such information, could become available via some "out-of-band" mechanism.
- information for fetching such data could be signaled or sent via SEI messages.
- the VCM encoder indicates moderator control information and/or post-filter control information along a VCM bitstream or a base-layer bitstream.
- the VCM decoder decodes moderator control information and/or post-filter control information along a VCM bitstream or a base-layer bitstream.
- Mechanisms for indicating information along a VCM or base-layer bitstream may comprise, but may not be limited to, one or more of the following:
- a first track carries moderator control information and/or post-filter control information
- a second track carries a VCM or base-layer bitstream, wherein a track may conform or be similar to ISO base media file format track
- - metadata such as a sample group or sample auxiliary information conforming or similar to the ISO base media file format, carries moderator control information and/or post-filter control information and is associated with a track carrying a VCM or base-layer bitstream, wherein a track may, for example, conform or be similar to ISO base media file format track;
- a first representation or stream carries moderator control information and/or post-filter control information
- a second representation or stream carries a VCM or base-layer bitstream
- a stream or representation may, for example, conform or be similar to a Representation of ISO/IEC 23009-1 (known as MPEG DASH) or a Real-time Transport Protocol stream as specified by the Internet Engineering Task Force (IETF);
- a media description carries moderator control information and/or post-filter control information, wherein a media description may, for example, conform to or be similar to IETF Session Description Protocol or MPEG DASH Media Presentation Description.
- the system architecture may comprise more than two bitstreams/sub-bitstreams where one or more of the bitstreams may be used for human consumption purposes and one or more bitstreams may be used for one or more machine vision tasks.
- the bitstreams for both human and machine consumption may be coded with traditional codecs.
- one or more of the bitstreams for human and/or machine consumption are coded with traditional codecs and one or more are coded with neural network-based codecs.
- one or more moderator components may be used for bitstreams of traditional codecs in order to change/optimize one or more of the previously described parameters of the bitstream. An example of such a system is illustrated in Figure 12.
- the changes made by the moderators in different bitstreams may be different.
- the NN-based moderator and the post-filter may be trained jointly.
- the method for decoding is shown in Figure 13a.
- the method generally comprises receiving 1350 an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modifying 1360 the received input bitstream for improving task performance of one or more machine tasks.
- Each of the steps can be implemented by a respective module of a computer system.
- An apparatus comprises means for receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and means for modifying the received input bitstream for improving task performance of one or more machine tasks.
- the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
- the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 13a according to various embodiments.
- the method for encoding according to an embodiment is shown in Figure 13b.
- the method generally comprises receiving 1310 an input video sequence; encoding 1320 the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; generating 1330 moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and including 1340 the moderator control information in or along the bitstream.
- An apparatus comprises means for receiving an input video sequence; means for encoding the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; means for generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and means for including the moderator control information in or along the bitstream.
- the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
- the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 13b according to various embodiments.
- the apparatus is a user equipment for the purposes of the present embodiments.
- the apparatus 90 comprises a main processing unit 91, a memory 92, a user interface 94, and a communication interface 93.
- the apparatus may also comprise a camera module 95.
- the apparatus may be configured to receive image and/or video data from an external camera device over a communication network.
- the memory 92 stores data including computer program code in the apparatus 90.
- the computer program code is configured to implement the method according to various embodiments by means of various computer modules.
- the camera module 95 or the communication interface 93 receives data, in the form of images or video stream, to be processed by the processor 91.
- the communication interface 93 forwards processed data, i.e., the image file, for example to a display of another device, such as a virtual reality headset.
- the apparatus 90 is a video source comprising the camera module 95
- user inputs may be received from the user interface.
- the various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the method.
- a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment.
- a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of various embodiments.
Abstract
The embodiments relate to a method for encoding and decoding. The method for encoding comprises receiving (1310) an input video sequence; encoding (1320) the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; generating (1330) moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and including (1340) the moderator control information in or along the bitstream. The method for decoding comprises receiving (1350) an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modifying (1360) the received input bitstream for improving task performance of one or more machine tasks. The embodiments also relate to apparatuses and computer program products for implementing the methods.
Description
A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR VIDEO ENCODING AND VIDEO DECODING
The project leading to this application has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 783162. The JU receives support from the European Union's Horizon 2020 research and innovation program and Netherlands, Czech Republic, Finland, Spain, Italy.
The project leading to this application has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 876019. The JU receives support from the European Union's Horizon 2020 research and innovation program and Germany, Netherlands, Austria, Romania, France, Sweden, Cyprus, Greece, Lithuania, Portugal, Italy, Finland, Turkey.
Technical Field
The present solution generally relates to video coding for machines.
Background
One of the elements in image and video compression is to compress data while maintaining the quality to satisfy human perceptual ability. However, in recent development of machine learning, machines can replace humans when analyzing data for example in order to detect events and/or objects in video/image. Thus, when decoded image data is consumed by machines, the quality of the compression can be different from the human approved quality. Therefore, a concept Video Coding for Machines (VCM) has been provided.
Summary
The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
Various aspects include a method, an apparatus and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments are disclosed in the dependent claims.
According to a first aspect, there is provided an apparatus for decoding comprising means for receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and means for modifying the received input bitstream for improving task performance of one or more machine tasks.
According to a second aspect, there is provided an apparatus for encoding, comprising means for receiving an input video sequence; means for encoding the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; means for generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and means for including the moderator control information in or along the bitstream.
According to a third aspect, there is provided a method for decoding comprising receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modifying the received input bitstream for improving task performance of one or more machine tasks.
According to a fourth aspect, there is provided a method for encoding comprising receiving an input video sequence; encoding the input video sequence by a baselayer encoder to a bitstream comprising an encoded video stream; generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and including the moderator control information in or along the bitstream.
According to a fifth aspect, there is provided an apparatus for decoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer
encoder; and modify the received input bitstream for improving task performance of one or more machine tasks.
According to a sixth aspect, there is provided an apparatus for encoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive an input video sequence; encode the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; generate moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and include the moderator control information in or along the bitstream.
According to a seventh aspect, there is provided computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to: receive an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modify the received input bitstream for improving task performance of one or more machine tasks.
According to an eighth aspect, there is provided computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to: receive an input video sequence; encode the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; generate moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and include the moderator control information in or along the bitstream.
According to an embodiment, the modified bitstream is decoded by a base-layer decoder, and one or more machine tasks are applied to the decoded pictures.
- According to an embodiment, the caused modifications of the encoded video stream comprise one or more of the following: switching an in-loop filter on or off; adjusting parameters of an in-loop filter.
- According to an embodiment, an in-loop filter is switched on or off and a parameter of the in-loop filter is adjusted.
- According to an embodiment, adjusting a parameter of an in-loop filter comprises one or both of the following: adjusting the parameters of the in-loop filter to cause sharpening of blocks classified to comprise edges; adjusting the parameters of the in-loop filter to cause blurring or smoothing of blocks that are classified to comprise texture.
According to an embodiment, post-filter information is generated for adjusting filtering of decoded pictures for improving task performance of one or more machine tasks.
According to an embodiment, moderator control information is received, and the received bitstream is modified based on the moderator control information.
According to an embodiment, the decoded pictures are filtered prior to applying one or more machine tasks.
According to an embodiment, post-filter control information is received, and the decoded pictures are filtered based on the post-filter control information.
According to an embodiment, information indicative of a subset of pictures subject to be modified for improving task performance of one or more machine tasks is generated.
According to an embodiment, information indicative of a subset of pictures subject to be modified for improving task performance of one or more machine tasks is received and interpreted.
According to an embodiment, the input bitstream comprises a base-layer bitstream and the base-layer bitstream comprises supplemental information to modify the base-layer bitstream or to filter decoded pictures.
According to an embodiment, the computer program product is embodied on a non- transitory computer readable medium.
Description of the Drawings
In the following, various embodiments will be described in more detail with reference to the appended drawings, in which
Fig. 1 shows an example of a codec with neural network (NN) components;
Fig. 2 shows another example of a video coding system with neural network components;
Fig. 3 shows an example of a neural auto-encoder architecture;
Fig. 4 shows an example of a neural network-based end-to-end learned video coding system;
Fig. 5 shows an example of a video coding for machines;
Fig. 6 shows an example of a pipeline for end-to-end learned system;
Fig. 7 shows an example of training an end-to-end learned system;
Fig. 8 shows an example of an end-to-end learned image or video codec;
Fig. 9 shows an example of a moderator component at the VCM decoder side;
Fig. 10 shows an example of a moderator component at the VCM encoder side;
Fig. 11 shows an example of a moderator component with multiple bitstream outputs at the VCM decoder side;
Fig. 12 shows an example of a moderator component for feature residual enhanced coding;
Fig. 13a is a flowchart illustrating a method for decoding according to an embodiment;
Fig. 13b is a flowchart illustrating a method for encoding according to an embodiment, and
Fig. 14 illustrates an apparatus according to an embodiment.
Description of Example Embodiments
The following description and drawings are illustrative and are not to be construed as unnecessarily limiting. The specific details are provided for a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be, but not necessarily are, reference to the same embodiment and such references mean at least one of the embodiments.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.
The present embodiments are targeted to a video coding system for machines.
Before discussing the present embodiments in more detailed manner, a short reference to related technology is given.
A neural network (NN) is a computation graph consisting of several layers of computation. Each layer consists of one or more units, where each unit performs an elementary computation. A unit is connected to one or more other units, and the connection may have an associated weight. The weight may be used for scaling the signal passing through the associated connection. Weights are learnable parameters, i.e., values which can be learned from training data. There may be other learnable parameters, such as those of batch-normalization layers.
Two widely used architectures for neural networks are feed-forward and recurrent architectures. Feed-forward neural networks are such that there is no feedback loop:
each layer takes input from one or more of the layers before and provides its output as the input for one or more of the subsequent layers. Also, units inside a certain layer take input from units in one or more of preceding layers and provide output to one or more of following layers.
Initial layers (those close to the input data) extract semantically low-level features such as edges and textures in images, and intermediate and final layers extract more high-level features. After the feature extraction layers there may be one or more layers performing a certain task, such as classification, semantic segmentation, object detection, denoising, style transfer, super-resolution, etc. In recurrent neural nets, there is a feedback loop, so that the network becomes stateful, i.e., it is able to memorize information or a state.
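As an illustration of a feed-forward architecture, the following small PyTorch definition stacks convolutional feature-extraction layers followed by a task layer; the layer sizes and the classification task are arbitrary assumptions.

```python
import torch.nn as nn

feed_forward_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # initial layers: low-level features (edges, textures)
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # intermediate layers: higher-level features
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                                        # final layer performing the task, here 10-class classification
)
```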
Neural networks are being utilized in an ever-increasing number of applications for many different types of devices, such as mobile phones. Examples include image and video analysis and processing, social media data analysis, device usage data analysis, etc.
One of the important properties of neural networks (and other machine learning tools) is that they are able to learn properties from input data, either in supervised way or in unsupervised way. Such learning is a result of a training algorithm, or of a meta-level neural network providing the training signal.
In general, the training algorithm consists of changing some properties of the neural network so that its output is as close as possible to a desired output. For example, in the case of classification of objects in images, the output of the neural network can be used to derive a class or category index which indicates the class or category that the object in the input image belongs to. Training usually happens by minimizing or decreasing the output’s error, also referred to as the loss. Examples of losses are mean squared error, cross-entropy, etc. In recent deep learning techniques, training is an iterative process, where at each iteration the algorithm modifies the weights of the neural net to make a gradual improvement of the network’s output, i.e., to gradually decrease the loss.
In this description, terms “model” and “neural network” are used interchangeably, and also the weights of neural networks are sometimes referred to as learnable parameters or simply as parameters.
Training a neural network is an optimization process. The goal of the optimization or training process is to make the model learn the properties of the data distribution from a limited training dataset. In other words, the goal is to learn to use a limited training dataset in order to learn to generalize to previously unseen data, i.e., data which was not used for training the model. This is usually referred to as generalization. In practice, data may be split into at least two sets, the training set and the validation set. The training set is used for training the network, i.e., to modify its learnable parameters in order to minimize the loss. The validation set is used for checking the performance of the network on data, which was not used to minimize the loss, as an indication of the final performance of the model. In particular, the errors on the training set and on the validation set are monitored during the training process to understand the following things:
- If the network is learning at all - in this case, the training set error should decrease, otherwise the model is in the regime of underfitting.
- If the network is learning to generalize - in this case, also the validation set error needs to decrease and to be not too much higher than the training set error. If the training set error is low, but the validation set error is much higher than the training set error, or it does not decrease, or it even increases, the model is in the regime of overfitting. This means that the model has just memorized the training set’s properties and performs well only on that set but performs poorly on a set not used for tuning its parameters.
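A minimal sketch of the monitoring described above is given below; the model, data loaders, loss function, and optimizer are hypothetical placeholders and are not part of the embodiments.

```python
# Sketch of monitoring training/validation error to detect under-/overfitting.
# `model`, `train_loader`, `val_loader`, `loss_fn`, `optimizer` are assumed to
# exist elsewhere (hypothetical).
import torch

def run_epoch(model, loader, loss_fn, optimizer=None):
    total, n = 0.0, 0
    for x, y in loader:
        pred = model(x)
        loss = loss_fn(pred, y)
        if optimizer is not None:      # training: update the learnable parameters
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        total += loss.item() * x.size(0)
        n += x.size(0)
    return total / n

# for epoch in range(num_epochs):
#     train_err = run_epoch(model, train_loader, loss_fn, optimizer)
#     with torch.no_grad():
#         val_err = run_epoch(model, val_loader, loss_fn)
#     # train_err not decreasing        -> underfitting
#     # val_err >> train_err or rising  -> overfitting
```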
Lately, neural networks have been used for compressing and de-compressing data such as images, i.e., in an image codec. The most widely used architecture for realizing one component of an image codec is the auto-encoder, which is a neural network consisting of two parts: a neural encoder and a neural decoder. The neural encoder takes as input an image and produces a code which requires less bits than the input image. This code may be obtained by applying a binarization or quantization process to the output of the encoder. The neural decoder takes in this code and reconstructs the image which was input to the neural encoder.
Such neural encoder and neural decoder may be trained to minimize a combination of bitrate and distortion, where the distortion may be based on one or more of the following metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), or similar. These distortion metrics are meant to be correlated to the human visual perception quality, so that minimizing or
maximizing one or more of these distortion metrics results in improving the visual quality of the decoded image as perceived by humans.
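The following sketch illustrates, under simplified assumptions, an auto-encoder style neural codec step with rounding-based quantization and an MSE/PSNR distortion measure; the network layers and the quantization scale are arbitrary choices made for this example.

```python
# Sketch of a neural encoder / quantization / neural decoder round trip with a
# simple distortion computation; the networks are hypothetical placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))

x = torch.rand(1, 3, 64, 64)           # input image with samples in [0, 1]
code = torch.round(encoder(x) * 255)   # coarse quantization of the latent code
x_hat = decoder(code / 255)            # reconstruction by the neural decoder

mse = torch.mean((x - x_hat) ** 2)     # distortion: Mean Squared Error
psnr = 10 * torch.log10(1.0 / mse)     # PSNR for a [0, 1] sample range
```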
A video codec comprises an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form. An encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
The H.264/AVC standard was developed by the Joint Video Team (JVT) of the Video Coding Experts Group (VCEG) of the Telecommunications Standardization Sector of International Telecommunication Union (ITU-T) and the Moving Picture Experts Group (MPEG) of International Organisation for Standardization (ISO) / International Electrotechnical Commission (IEC). The H.264/AVC standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.264 and ISO/IEC International Standard 14496-10, also known as MPEG-4 Part 10 Advanced Video Coding (AVC). Extensions of H.264/AVC include Scalable Video Coding (SVC) and Multiview Video Coding (MVC).
The High Efficiency Video Coding (H.265/HEVC a.k.a. HEVC) standard was developed by the Joint Collaborative Team on Video Coding (JCT-VC) of VCEG and MPEG. The standard was published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2, also known as MPEG-H Part 2 High Efficiency Video Coding (HEVC). Later versions of H.265/HEVC included scalable, multiview, fidelity range, three-dimensional, and screen content coding extensions which may be abbreviated SHVC, MV-HEVC, REXT, 3D-HEVC, and SCC, respectively.
Versatile Video Coding (H.266 a.k.a. VVC), defined in ITU-T Recommendation H.266 and equivalently in ISO/IEC 23090-3 (also referred to as MPEG-I Part 3), is a video compression standard developed as the successor to HEVC.
An elementary unit for the input to a video encoder and the output of a video decoder, respectively, in most cases is a picture. A picture given as an input to an encoder may also be referred to as a source picture, and a picture decoded by a decoder may be referred to as a decoded picture or a reconstructed picture.
The source and decoded pictures are each comprised of one or more sample arrays, such as one of the following sets of sample arrays:
- Luma (Y) only (monochrome),
- Luma and two chroma (YCbCr or YCgCo),
- Green, Blue and Red (GBR, also known as RGB),
- Arrays representing other unspecified monochrome or tri-stimulus color samplings (for example, YZX, also known as XYZ).
A component may be defined as an array or single sample from one of the three sample arrays (luma and two chroma) that compose a picture, or the array or a single sample of the array that compose a picture in monochrome format.
Hybrid video codecs, for example ITU-T H.263 and H.264, may encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly the prediction error, i.e., the difference between the predicted block of pixels and the original block of pixels, is coded. This may be done by transforming the difference in pixel values using a specified transform (e.g., Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, encoder can control the balance between the accuracy of the pixel representation (picture quality) and size of the resulting coded video representation (file size or transmission bitrate).
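The second phase described above (transforming, quantizing, and reconstructing the prediction error) can be sketched as follows; the block contents and the quantization step are arbitrary example values, not values from any particular codec.

```python
# Sketch of hybrid-coding residual processing: DCT, quantization, reconstruction.
import numpy as np
from scipy.fft import dctn, idctn

original = np.random.rand(8, 8)
predicted = np.random.rand(8, 8)          # e.g. from motion compensation or intra prediction
residual = original - predicted           # prediction error

qp_step = 0.05                            # coarser step -> lower bitrate, more distortion
coeffs = dctn(residual, norm="ortho")     # transform of the residual (DCT)
levels = np.round(coeffs / qp_step)       # quantization: the lossy step

rec_residual = idctn(levels * qp_step, norm="ortho")   # inverse quantization + inverse DCT
reconstructed = predicted + rec_residual               # decoder-side reconstruction
```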
Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, exploits temporal redundancy. In inter prediction the sources of prediction are previously decoded pictures.
Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy- coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
The decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame. The decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
In video codecs, the motion information may be indicated with motion vectors associated with each motion compensated image block. Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, they may be coded differentially with respect to block specific predicted motion vectors. In video codecs, the predicted motion vectors may be created in a predefined way, for example calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, the reference index of a previously coded/decoded picture can be predicted. The reference index is typically predicted from adjacent blocks and/or co-located blocks in a temporal reference picture. Moreover, high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference
picture list, is predicted and used without any modification/correction. Similarly, predicting the motion field information may be carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signalled among a list of motion field candidates filled with the motion field information of available adjacent/co-located blocks.
In video codecs the prediction residual after motion compensation may be first transformed with a transform kernel (like DCT) and then coded. The reason for this is that often there still exists some correlation within the residual, and the transform can in many cases help reduce this correlation and provide more efficient coding.
Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g., the desired Macroblock mode and associated motion vectors. This kind of cost function uses a weighting factor to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
C = D + λR, where C is the Lagrangian cost to be minimized, D is the image distortion (e.g., Mean Squared Error) with the mode and motion vectors considered, λ is the Lagrangian weighting factor, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
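A minimal sketch of the Lagrangian mode decision described above follows; the candidate modes and their distortion and rate estimates are hypothetical example values.

```python
# Sketch of Lagrangian mode selection: pick the coding mode minimizing
# C = D + lambda * R over a (hypothetical) candidate list.
def rd_cost(distortion, rate_bits, lagrange_multiplier):
    return distortion + lagrange_multiplier * rate_bits

# Each candidate: (mode name, estimated distortion D, estimated bits R)
candidates = [("intra_dc", 120.0, 64), ("inter_merge", 90.0, 110), ("skip", 200.0, 2)]
lmbda = 0.85

best_mode = min(candidates, key=lambda c: rd_cost(c[1], c[2], lmbda))
print(best_mode[0])   # mode with the lowest Lagrangian cost
```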
A partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.
A bitstream may be defined as a sequence of bits, which may in some coding formats or standards be in the form of a network abstraction layer (NAL) unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences.
A bitstream format may comprise a sequence of syntax structures.
A syntax element may be defined as an element of data represented in the bitstream. A syntax structure may be defined as zero or more syntax elements present together in the bitstream in a specified order.
A NAL unit may be defined as a syntax structure containing an indication of the type of data to follow and bytes containing that data in the form of an RBSP interspersed as necessary with start code emulation prevention bytes. A raw byte sequence payload (RBSP) may be defined as a syntax structure containing an integer number of bytes that is encapsulated in a NAL unit. An RBSP is either empty or has the form of a string of data bits containing syntax elements followed by an RBSP stop bit and followed by zero or more subsequent bits equal to 0.
Some coding formats specify parameter sets that may carry parameter values needed for the decoding or reconstruction of decoded pictures. A parameter may be defined as a syntax element of a parameter set. A parameter set may be defined as a syntax structure that contains parameters and that can be referred to from or activated by another syntax structure for example using an identifier.
A coding standard or specification may specify several types of parameter sets. It needs to be understood that embodiments may be applied but are not limited to the described types of parameter sets and embodiments could likewise be applied to any parameter set type.
A parameter set may be activated when it is referenced e.g., through its identifier. An adaptation parameter set (APS) may be defined as a syntax structure that applies to zero or more slices. There may be different types of adaptation parameter sets. An adaptation parameter set may for example contain filtering parameters for a particular type of a filter. In VVC, three types of APSs are specified carrying parameters for one of: adaptive loop filter (ALF), luma mapping with chroma scaling (LMCS), and scaling lists. A scaling list may be defined as a list that associates each frequency index with a scale factor for the scaling process, which multiplies transform coefficient levels by a scaling factor, resulting in transform coefficients. In VVC, an APS is referenced through its type (e.g., ALF, LMCS, or scaling list) and an identifier. In other words, different types of APSs have their own identifier value ranges.
An Adaptation Parameter Set (APS) may comprise parameters for decoding processes of different types, such as adaptive loop filtering or luma mapping with chroma scaling.
Video coding specifications may enable the use of supplemental enhancement information (SEI) messages or alike. Some video coding specifications include SEI network abstraction layer (NAL) units, and some video coding specifications contain both prefix SEI NAL units and suffix SEI NAL units, where the former type can start a picture unit or alike and the latter type can end a picture unit or alike. An SEI NAL unit contains one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, post-processing of decoded pictures, rendering, error detection, error concealment, and resource reservation. Several SEI messages are specified in H.264/AVC, H.265/HEVC, H.266/VVC, and H.274/VSEI standards, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use. The standards may contain the syntax and semantics for the specified SEI messages but a process for handling the messages in the recipient might not be defined. Consequently, encoders may be required to follow the standard specifying a SEI message when they create SEI message(s), and decoders might not be required to process SEI messages for output order conformance. One of the reasons to include the syntax and semantics of SEI messages in standards is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified. SEI messages are generally not extended in future amendments or versions of the standard.
The phrase along the bitstream (e.g., indicating along the bitstream) or along a coded unit of a bitstream (e.g., indicating along a coded tile) may be used in claims and described embodiments to refer to transmission, signaling, or storage in a manner that the "out-of-band" data is associated with but not included within the bitstream or the coded unit, respectively. The phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out- of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively. For example, the phrase along the bitstream may be used when the bitstream is
contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
Image and video codecs may use a set of filters to enhance the visual quality of the predicted visual content, which can be applied either in-loop or out-of-loop, or both. In the case of in-loop filters, the filter applied on one block in the currently-encoded frame will affect the encoding of another block in the same frame and/or in another frame which is predicted from the current frame. An in-loop filter can affect the bitrate and/or the visual quality. In fact, an enhanced block will cause a smaller residual (difference between original block and predicted-and-filtered block), thus requiring fewer bits to be encoded. An out-of-loop filter will be applied on a frame after it has been reconstructed; the filtered visual content will not be used as a source for prediction, and thus it may only impact the visual quality of the frames that are output by the decoder.
In-loop filters in a video/image encoder and decoder may comprise, but may not be limited to, one or more of the following:
- deblocking filter (DBF);
- sample adaptive offset (SAO) filter;
- adaptive loop filter (ALF) for luma and/or chroma components;
- cross-component adaptive loop filter (CC-ALF).
A deblocking filter may be configured to reduce blocking artefacts due to block-based coding. A deblocking filter may be applied (only) to samples located at prediction unit (PU) and/or transform unit (TU) boundaries, except at the picture boundaries or when disabled at slice and/or tile boundaries. Horizontal filtering may be applied (first) for vertical boundaries, and vertical filtering may be applied for horizontal boundaries.
A sample adaptive offset (SAO) may be another in-loop filtering process that modifies decoded samples by conditionally adding an offset value to a sample (possibly to each sample), based on values in look-up tables transmitted by the encoder. SAO may have one or more (e.g., two) operation modes; band offset and edge offset modes. In the band offset mode, an offset may be added to the sample
value depending on the sample amplitude. The full sample amplitude range may be divided into a number of bands (e.g., 32 bands), and sample values belonging to four of these bands may be modified by adding a positive or negative offset, which may be signalled for each coding tree unit (CTU). In the edge offset mode, the horizontal, vertical, and two diagonal gradients may be used for classification.
An Adaptive Loop Filter (ALF) may apply block-based filter adaptation. For example, for the luma component, one among 25 filters may be selected for each 4x4 block, based on the direction and activity of local gradients, which are derived using the sample values of that 4x4 block. The ALF classification may be performed on 2x2 block units, for instance. When all of the vertical, horizontal and diagonal gradients are below a first threshold value, the block may be classified as texture (not containing edges). Otherwise, the block may be classified to contain edges, a dominant edge direction may be derived from horizontal, vertical and diagonal gradients, and a strength of the edge (e.g. strong or weak) may be further derived from the gradient values. When a filter within a filter set has been selected based on the classification, the filtering may be performed by applying a 7x7 diamond filter, for example, to the luma component. An ALF filter set may comprise one filter for each chroma component, and a 5x5 diamond filter may be applied to the chroma components, for example. In an example, the filter coefficients use point-symmetry relative to the center point. An ALF design may comprise clipping the difference between the neighboring sample value and the current to-be-filtered sample before the difference is added, which provides adaptability related to both spatial relationship and value similarity between samples.
In an example, cross-component ALF (CC-ALF) uses luma sample values to refine each chroma component by applying an adaptive linear filter to the luma channel and then using the output of this filtering operation for chroma refinement. Filtering in CC-ALF is accomplished by applying a linear, diamond shaped filter to the luma channel.
In an approach, ALF filter parameters are signalled in an Adaptation Parameter Set (APS). For example, in one APS, up to 25 sets of luma filter coefficients and clipping value indices, and up to eight sets of chroma filter coefficients and clipping value indices could be signalled. To reduce the overhead, filter coefficients of different classifications for the luma component can be merged. In the slice header, the identifiers of the APSs used for the current slice are signaled.
In the VVC slice header, up to 7 ALF APS indices can be signaled to specify the luma filter sets that are used for the current slice. The filtering process can be further controlled at coding tree block (CTB) level. A flag is signalled to indicate whether ALF is applied to a luma CTB. A filter set among 16 fixed filter sets and the filter sets from APSs selected in the slice header may be selected per each luma CTB by the encoder and may be decoded per each luma CTB by the decoder. A filter set index is signaled for a luma CTB to indicate which filter set is applied. The 16 fixed filter sets are pre-defined in the VVC standard and hardcoded in both the encoder and the decoder. The 16 fixed filter sets may be referred to as the pre-defined ALFs.
A feature known as luma mapping with chroma scaling (LMCS) is included in H.266/VVC. The luma mapping (LM) part remaps luma sample values. It may be used to utilize the full luma sample value range (e.g., 0 to 1023, inclusive, at a bit depth of 10 bits per sample) for content that would otherwise occupy only a subset of the range.
The luma sample values of an input video signal to the encoder and output video signal from the decoder are represented in the original (unmapped) sample domain. Forward luma mapping maps luma sample values from the original sample domain to the mapped sample domain. Inverse luma mapping maps luma sample values from the mapped sample domain to the original sample domain.
In an example codec architecture, the processes in the mapped sample domain include inverse quantization, inverse transform, luma intra prediction and summing the luma prediction with the luma residue values. The processes in the original sample domain include in-loop filters (e.g., deblocking, SAO, ALF), inter prediction, and storage of pictures in the decoded picture buffer (DPB).
In an example decoder, one or more of the following steps may be performed:
- Inverse quantization and inverse transform are applied to the decoded luma transform coefficients to produce the luma residues in the mapped sample domain, Y’res;
- Reconstructed luma sample values in the mapped sample domain, Y’r, are obtained by summing Y’res with the corresponding predicted luma values in the mapped sample domain, Y’pred.
- For intra prediction, Y’pred is directly obtained by performing intra prediction in mapped sample domain.
- For inter prediction, the predicted luma values in original sample domain, Ypred, are first obtained by motion compensation using reference pictures from the DPB, and then forward luma mapping is applied to produce the luma values in the mapped sample domain, Y’pred.
- Inverse luma mapping is applied to reconstructed values Y’r to produce reconstructed luma sample values in the original sample domain, which are processed by in-loop filters (deblocking, sample adaptive offset, and adaptive loop filter) before being stored in the DPB.
In VVC, LMCS syntax elements are signalled in an adaptation parameter set (APS) with aps_params_type equal to 1 (LMCS_APS). The value range for an adaptation parameter set identifier (aps_adaptation_parameter_set_id) is from 0 to 3, inclusive, for LMCS APSs. The use of LMCS can be enabled or disabled in a picture header. When LMCS is enabled in a picture header, the LMCS APS identifier value used for the picture (ph_lmcs_aps_id) is included in the picture header. Thus, the same LMCS parameters are used for the entire picture. Note also that when LMCS is enabled in a picture header and a chroma format including the chroma components is in use, the chroma scaling part can be enabled or disabled in the picture header through ph_chroma_residual_scale_flag. When a picture has multiple slices, LMCS is further enabled or disabled in the slice header for each slice.
In VVC, LMCS data within an LMCS APS comprises syntax related to a piecewise linear model of up to 16 pieces for luma mapping. The luma sample value range of the piecewise linear forward mapping function is uniformly sampled into 16 pieces of the same length OrgCW. For example, for a 10-bit input video, each of the 16 pieces contains OrgCW = 64 input codewords. For each piece of index i, the number of output (mapped) codewords is defined as binCW[i]. binCW[i] is determined during the encoding process. The difference between binCW[i] and OrgCW is signalled in the LMCS APS. The slopes scaleY[i] and invScaleY[i] of the functions FwdMap and InvMap are respectively derived as: scaleY[i] = binCW[i] / OrgCW and invScaleY[i] = OrgCW / binCW[i].
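The following sketch illustrates a piecewise-linear forward luma mapping under the slope definitions above for a 10-bit signal; the binCW values are made-up example numbers, not values from any real bitstream.

```python
# Sketch of a 16-piece piecewise-linear forward luma mapping (10-bit video).
import numpy as np

bit_depth = 10
num_pieces = 16
OrgCW = (1 << bit_depth) // num_pieces             # 64 input codewords per piece

binCW = np.full(num_pieces, OrgCW, dtype=np.int64)
binCW[4:12] += 8                                    # hypothetical reshaping of mid-range pieces
binCW[:4] -= 8                                      # ...compensated at the range ends so that
binCW[12:] -= 8                                     # the mapped codewords still sum to 1024

scaleY = binCW / OrgCW                              # forward-mapping slope per piece
pivots = np.concatenate(([0], np.cumsum(binCW)))    # mapped-domain piece boundaries

def fwd_map(y):
    i = min(y // OrgCW, num_pieces - 1)             # piece index of the input sample
    return pivots[i] + scaleY[i] * (y - i * OrgCW)  # piecewise-linear mapped value

print(fwd_map(512))   # mapped value of a mid-range luma sample
```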
Recently, neural networks (NNs) have been used in the context of image and video compression, by following mainly two approaches.
In one approach, NNs are used to replace one or more of the components of a traditional codec such as VVC/H.266. Here, the term “traditional” refers to those codecs whose components and their parameters may not be learned from data. Examples of such components are:
- Additional in-loop filter, for example by having the NN as an additional in-loop filter with respect to the traditional loop filters.
- Single in-loop filter, for example by having the NN replacing all traditional in-loop filters.
- Intra-frame prediction.
- Inter-frame prediction.
- Transform and/or inverse transform.
- Probability model for the arithmetic codec.
- Etc.
Figure 1 illustrates examples of functioning of NNs as components of a traditional codec's pipeline, in accordance with an embodiment. In particular, Figure 1 illustrates an encoder, which also includes a decoding loop. Figure 1 is shown to include components described below:
- A luma intra pred block or circuit 101 . This block or circuit performs intra prediction in the luma domain, for example, by using already reconstructed data from the same frame. The operation of the luma intra pred block or circuit 101 may be performed by a deep neural network such as a convolutional auto-encoder.
- A chroma intra pred block or circuit 102. This block or circuit performs intra prediction in the chroma domain, for example, by using already reconstructed data from the same frame. The chroma intra pred block or circuit 102 may perform cross-component prediction, for example, predicting chroma from luma. The operation of the chroma intra pred block or circuit 102 may be performed by a deep neural network such as a convolutional auto-encoder.
- An intra pred block or circuit 103 and inter-pred block or circuit 104. These blocks or circuits perform intra prediction and inter-prediction, respectively. The intra pred block or circuit 103 and the inter-pred block or circuit 104 may perform the prediction on all components, for example, luma and chroma. The operations of the intra pred
block or circuit 103 and inter-pred block or circuit 104 may be performed by two or more deep neural networks such as convolutional auto-encoders.
- A probability estimation block or circuit 105 for entropy coding. This block or circuit performs prediction of probability for the next symbol to encode or decode, which is then provided to the entropy coding module 112, such as the arithmetic coding module, to encode or decode the next symbol. The operation of the probability estimation block or circuit 105 may be performed by a neural network.
- A transform and quantization (T/Q) block or circuit 106. These are actually two blocks or circuits. The transform and quantization block or circuit 106 may perform a transform of input data to a different domain, for example, the FFT transform would transform the data to frequency domain. The transform and quantization block or circuit 106 may quantize its input values to a smaller set of possible values. In the decoding loop, there may be inverse quantization block or circuit and inverse transform block or circuit 113. One or both of the transform block or circuit and quantization block or circuit may be replaced by one or two or more neural networks. One or both of the inverse transform block or circuit and inverse quantization block or circuit 113 may be replaced by one or two or more neural networks.
- An in-loop filter block or circuit 107. Operations of the in-loop filter block or circuit 107 are performed in the decoding loop, and it performs filtering on the output of the inverse transform block or circuit, or otherwise on the reconstructed data, in order to enhance the reconstructed data with respect to one or more predetermined quality metrics. This filter may affect both the quality of the decoded data and the bitrate of the bitstream output by the encoder. The operation of the in-loop filter block or circuit
107 may be performed by a neural network, such as a convolutional auto-encoder. In examples, the operation of the in-loop filter may be performed by multiple steps or filters, where the one or more steps may be performed by neural networks.
- A postprocessing filter block or circuit 108. The postprocessing filter block or circuit
108 may be applied only at the decoder side, as it may not affect the encoding process. The postprocessing filter block or circuit 108 filters the reconstructed data output by the in-loop filter block or circuit 107, in order to enhance the reconstructed data. The postprocessing filter block or circuit 108 may be replaced by a neural network, such as a convolutional auto-encoder.
- A resolution adaptation block or circuit 109: this block or circuit may downsample the input video frames, prior to encoding. Then, in the decoding loop, the reconstructed data may be upsampled, by the upsampling block or circuit 110, to the original resolution. The operation of the resolution adaptation block or circuit 109 may be performed by a neural network such as a convolutional autoencoder.
- An encoder control block or circuit 111. This block or circuit performs optimization of encoder's parameters, such as what transform to use, what quantization parameters (QP) to use, what intra-prediction mode (out of N intra-prediction modes) to use, and the like. The operation of the encoder control block or circuit 111 may be performed by a neural network, such as a classifier convolutional network, or such as a regression convolutional network.
- An ME/MC block or circuit 114 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction. ME/MC stands for motion estimation / motion compensation.
In another approach, commonly referred to as “end-to-end learned compression”, NNs are used as the main components of the image/video codecs. In this second approach, there are two main options:
Option 1 : re-use the video coding pipeline but replace most or all the components with NNs. Referring to Figure 2, it illustrates an example of modified video coding pipeline based on a neural network, in accordance with an embodiment. An example of neural network may include, but is not limited to, a compressed representation of a neural network. Figure 2 is shown to include following components:
- A neural transform block or circuit 202: this block or circuit transforms the output of a summation/subtraction operation 203 to a new representation of that data, which may have lower entropy and thus be more compressible.
- A quantization block or circuit 204: this block or circuit quantizes an input data 201 to a smaller set of possible values.
- An inverse transform and inverse quantization blocks or circuits 206. These blocks or circuits perform the inverse or approximately inverse operation of the transform and the quantization, respectively.
- An encoder parameter control block or circuit 208. This block or circuit may control and optimize some or all the parameters of the encoding process, such as parameters of one or more of the encoding blocks or circuits.
- An entropy coding block or circuit 210. This block or circuit may perform lossless coding, for example based on entropy. One popular entropy coding technique is arithmetic coding.
- A neural intra-codec block or circuit 212. This block or circuit may be an image compression and decompression block or circuit, which may be used to encode and decode an intra frame. An encoder 214 may be an encoder block or circuit, such as the neural encoder part of an auto-encoder neural network. A decoder 216 may be a decoder block or circuit, such as the neural decoder part of an auto-encoder neural network. An intra-coding block or circuit 218 may be a block or circuit performing some intermediate steps between encoder and decoder, such as quantization, entropy encoding, entropy decoding, and/or inverse quantization.
- A deep loop filter block or circuit 220. This block or circuit performs filtering of reconstructed data, in order to enhance it.
- A decoded picture buffer block or circuit 222. This block or circuit is a memory buffer, keeping decoded frames, for example, reconstructed frames 224 and enhanced reference frames 226, to be used for inter prediction.
- An inter-prediction block or circuit 228. This block or circuit performs inter-frame prediction, for example, predicting from temporally nearby frames, for example, frames 232. An ME/MC 230 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction. ME/MC stands for motion estimation / motion compensation.
Option 2: re-design the whole pipeline, as follows.
- Encoder NN is configured to perform a non-linear transform;
- Quantization and lossless encoding of the encoder NN's output;
- Lossless decoding and dequantization;
- Decoder NN is configured to perform a non-linear inverse transform.
An example of option 2 is described in detail in Figure 3 which shows an encoder NN and a decoder NN being parts of a neural auto-encoder architecture, in accordance with an example. In Figure 3, the Analysis Network 301 is an Encoder
NN, and the Synthesis Network 302 is the Decoder NN, which may together be referred to as spatial correlation tools 303, or as neural auto-encoder.
As shown in Figure 3, the input data 304 is analyzed by the Encoder NN (Analysis Network 301), which outputs a new representation of that input data. The new representation may be more compressible. This new representation may then be quantized, by a quantizer 305, to a discrete number of values. The quantized data is then losslessly encoded, for example by an arithmetic encoder 306, thus obtaining a bitstream 307. The example shown in Figure 3 includes an arithmetic decoder 308 and an arithmetic encoder 306. The arithmetic encoder 306, or the arithmetic decoder 308, or the combination of the arithmetic encoder 306 and arithmetic decoder 308 may be referred to as arithmetic codec in some embodiments. On the decoding side, the bitstream is first losslessly decoded, for example, by using the arithmetic decoder 308. The losslessly decoded data is dequantized and then input to the Decoder NN, Synthesis Network 302. The output is the reconstructed or decoded data 309.
In case of lossy compression, the lossy steps may comprise the Encoder NN and/or the quantization.
In order to train this system, a training objective function (also called “training loss”) may be utilized, which may comprise one or more terms, or loss terms, or simply losses. In one example, the training loss comprises a reconstruction loss term and a rate loss term. The reconstruction loss encourages the system to decode data that is similar to the input data, according to some similarity metric. Examples of reconstruction losses are:
- Mean squared error (MSE);
- Multi-scale structural similarity (MS-SSIM);
- Losses derived from the use of a pretrained neural network. For example, error(f1 , f2), where f1 and f2 are the features extracted by a pretrained neural network for the input data and the decoded data, respectively, and error() is an error or distance function, such as L1 norm or L2 norm;
- Losses derived from the use of a neural network that is trained simultaneously with the end-to-end learned codec. For example, adversarial loss can be used, which is the loss provided by a discriminator neural network that is trained adversarially with respect to the codec, following the settings
proposed in the context of Generative Adversarial Networks (GANs) and their variants.
The rate loss encourages the system to compress the output of the encoding stage, such as the output of the arithmetic encoder. By “compressing”, we mean reducing the number of bits output by the encoding stage.
When an entropy-based lossless encoder is used, such as an arithmetic encoder, the rate loss typically encourages the output of the Encoder NN to have low entropy. Examples of rate losses are the following:
- A differentiable estimate of the entropy;
- A sparsification loss, i.e., a loss that encourages the output of the Encoder NN or the output of the quantization to have many zeros. Examples are L0 norm, L1 norm, L1 norm divided by L2 norm;
- A cross-entropy loss applied to the output of a probability model, where the probability model may be a NN used to estimate the probability of the next symbol to be encoded by an arithmetic encoder.
One or more of reconstruction losses may be used, and one or more of the rate losses may be used, as a weighted sum. The different loss terms may be weighted using different weights, and these weights determine how the final system performs in terms of rate-distortion loss. For example, if more weight is given to the reconstruction losses with respect to the rate losses, the system may learn to compress less but to reconstruct with higher accuracy (as measured by a metric that correlates with the reconstruction losses). These weights may be considered to be hyper-parameters of the training session and may be set manually by the person designing the training session, or automatically for example by grid search or by using additional neural networks.
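A sketch of such a weighted sum of loss terms is given below, assuming one MSE reconstruction term and one L1 sparsification term as the rate proxy; the weights are hypothetical hyper-parameters.

```python
# Sketch of a weighted training loss combining a reconstruction term and a rate term.
import torch

def training_loss(x, x_hat, latent, w_rec=1.0, w_rate=0.01):
    reconstruction = torch.mean((x - x_hat) ** 2)   # MSE reconstruction loss
    rate = torch.mean(torch.abs(latent))            # L1 sparsification loss as a rate proxy
    return w_rec * reconstruction + w_rate * rate
```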
As shown in Figure 4, a neural network-based end-to-end learned video coding system may contain an encoder 401 , a quantizer 402, a probability model 403, an entropy codec 420 (for example arithmetic encoder 405 / arithmetic decoder 406), a dequantizer 407, and a decoder 408. The encoder 401 and decoder 408 may be two neural networks, or mainly comprise neural network components. The probability model 403 may also comprise mainly neural network components. Quantizer 402, dequantizer 407 and entropy codec 420 may not be based on neural
network components, but they may potentially also comprise neural network components.
On the encoder side, the encoder component 401 takes a video x 409 as input and converts the video from its original signal space into a latent representation that may comprise a more compressible representation of the input. In the case of an input image, the latent representation may be a 3-dimensional tensor, where two dimensions represent the vertical and horizontal spatial dimensions, and the third dimension represent the “channels” which contain information at that specific location. If the input image is a 128x128x3 RGB image (with horizontal size of 128 pixels, vertical size of 128 pixels, and 3 channels for the Red, Green, Blue color components), and if the encoder downsamples the input tensor by 2 and expands the channel dimension to 32 channels, then the latent representation is a tensor of dimensions (or “shape”) 64x64x32 (i.e., with horizontal size of 64 elements, vertical size of 64 elements, and 32 channels). Please note that the order of the different dimensions may differ depending on the convention which is used; in some cases, for the input image, the channel dimension may be the first dimension, so for the above example, the shape of the input tensor may be represented as 3x128x128, instead of 128x128x3. In the case of an input video (instead of just an input image), another dimension in the input tensor may be used to represent temporal information.
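The shape example above can be reproduced with the following sketch, assuming a channels-first layout and a single strided convolution as the downsampling step; the layer choice is an assumption made for the example.

```python
# Sketch matching the shape example above: a 3x128x128 RGB input (channels first,
# as in PyTorch) downsampled by 2 and expanded to 32 channels.
import torch
import torch.nn as nn

encoder = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, stride=2, padding=1)
image = torch.rand(1, 3, 128, 128)   # batch of one RGB image
latent = encoder(image)
print(latent.shape)                  # torch.Size([1, 32, 64, 64])
```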
The quantizer component 402 quantizes the latent representation into discrete values given a predefined set of quantization levels. Probability model 403 and arithmetic codec component 420 work together to perform lossless compression for the quantized latent representation and generate bitstreams to be sent to the decoder side. Given a symbol to be encoded into the bitstream, the probability model 403 estimates the probability distribution of all possible values for that symbol based on a context that is constructed from available information at the current encoding/decoding state, such as the data that has already been encoded/decoded. Then, the arithmetic encoder 405 encodes the input symbols to bitstream using the estimated probability distributions.
On the decoder side, opposite operations are performed. The arithmetic decoder 406 and the probability model 403 first decode symbols from the bitstream to recover the quantized latent representation. Then the dequantizer 407 reconstructs the latent representation in continuous values and passes it to the decoder 408 to recover the
input video/image. Note that the probability model 403 in this system is shared between the encoding and decoding systems. In practice, this means that a copy of the probability model 403 is used at encoder side, and another exact copy is used at decoder side.
In this system, the encoder 401 , probability model 403, and decoder 408 may be based on deep neural networks. The system may be trained in an end-to-end manner by minimizing the following rate-distortion loss function:
L = D + λR, where D is the distortion loss term, R is the rate loss term, and λ is the weight that controls the balance between the two losses. The distortion loss term may be the mean square error (MSE), structure similarity (SSIM) or other metrics that evaluate the quality of the reconstructed video. Multiple distortion losses may be used and integrated into D, such as a weighted sum of MSE and SSIM. The rate loss term is normally the estimated entropy of the quantized latent representation, which indicates the number of bits necessary to represent the encoded symbols, for example, bits-per-pixel (bpp).
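A minimal sketch of this rate-distortion objective follows, where an estimated bit count stands in for the entropy estimate produced by the probability model; the value of λ is a hypothetical hyper-parameter.

```python
# Sketch of the rate-distortion loss L = D + lambda * R, with MSE as the
# distortion term and estimated bits-per-pixel as the rate term.
import torch

def rd_loss(x, x_hat, estimated_bits, lam=0.01):
    distortion = torch.mean((x - x_hat) ** 2)      # D: e.g. MSE
    num_pixels = x.shape[-1] * x.shape[-2]
    rate_bpp = estimated_bits / num_pixels         # R: estimated bits per pixel
    return distortion + lam * rate_bpp

x = torch.rand(1, 3, 64, 64)
x_hat = x + 0.02 * torch.randn_like(x)             # stand-in for the reconstruction
loss = rd_loss(x, x_hat, estimated_bits=torch.tensor(2048.0))
```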
For lossless video/image compression, the system may contain only the probability model 403 and arithmetic encoder/decoder 405, 406. The system loss function contains only the rate loss, since the distortion loss is always zero (i.e., no loss of information).
Reducing the distortion in image and video compression is often intended to increase human perceptual quality, as humans are considered to be the end users, i.e., consuming/watching the decoded image. Recently, with the advent of machine learning, especially deep learning, there is a rising number of machines (i.e., autonomous agents) that analyze data independently from humans and that may even take decisions based on the analysis results without human intervention. Examples of such analysis are object detection, scene classification, semantic segmentation, video event detection, anomaly detection, pedestrian tracking, etc. Example use cases and applications are self-driving cars, video surveillance cameras and public safety, smart sensor networks, smart TV and smart advertisement, person re-identification, smart traffic monitoring, drones, etc. When the decoded data is consumed by machines, a different quality metric shall be used
instead of human perceptual quality. Also, dedicated algorithms for compressing and decompressing data for machine consumption are likely to be different than those for compressing and decompressing data for human consumption. The set of tools and concepts for compressing and decompressing data for machine consumption is referred to here as Video Coding for Machines (VCM).
VCM concerns the encoding of video streams to allow consumption by machines. The term machine refers to any device other than a human. Examples of machines are a mobile phone, an autonomous vehicle, a robot, and other such intelligent devices which may have a degree of autonomy or run an intelligent algorithm to process the decoded stream beyond reconstructing the original input stream.
A machine may perform one or multiple tasks on the decoded stream. Examples of tasks can comprise the following:
- Classification: classify an image or video into one or more predefined categories. The output of a classification task may be a set of detected categories, also known as classes or labels. The output may also include the probability and confidence of each predefined category.
- Object detection: detect one or more objects in a given image or video. The output of an object detection task may be the bounding boxes and the associated classes of the detected objects. The output may also include the probability and confidence of each detected object.
- Instance segmentation: identify one or more objects in an image or video at the pixel level. The output of an instance segmentation task may be binary mask images or other representations of the binary mask images, e.g., closed contours, of the detected objects. The output may also include the probability and confidence of each object for each pixel.
- Semantic segmentation: assign the pixels in an image or video to one or more predefined semantic categories. The output of a semantic segmentation task may be binary mask images or other representations of the binary mask images, e.g., closed contours, of the assigned categories. The output may also include the probability and confidence of each semantic category for each pixel.
- Object tracking: track one or more objects in a video sequence. The output of an object tracking task may include frame index, object ID, object bounding boxes, probability, and confidence for each tracked object.
- Captioning: generate one or more short text descriptions for an input image or video. The output of the captioning task may be one or more short text sequences.
- Human pose estimation: estimate the position of the key points, e.g., wrist, elbows, knees, etc., from one or more human bodies in an image or video. The output of a human pose estimation task includes sets of locations of each key point of a human body detected in the input image or video.
- Human action recognition: recognize the actions, e.g., walking, talking, shaking hands, of one or more people in an input image or video. The output of the human action recognition may be a set of predefined actions, probability, and confidence of each identified action.
- Anomaly detection: detect abnormal objects or events in an input image or video. The output of an anomaly detection task may include the locations of detected abnormal objects or the segments of frames where abnormal events are detected in the input video.
It is likely that the receiver-side device has multiple “machines” or task neural networks (Task-NNs). These multiple machines may be used in a certain combination which is for example determined by an orchestrator sub-system. The multiple machines may be used for example in succession, based on the output of the previously used machine, and/or in parallel. For example, a video which was compressed and then decompressed may be analyzed by one machine (NN) for detecting pedestrians, by another machine (another NN) for detecting cars, and by another machine (another NN) for estimating the depth of all the pixels in the frames.
In this description, the terms “task machine”, “machine”, and “task neural network” are used interchangeably, and they refer to any process or algorithm (learned or not from data) which analyzes or processes data for a certain task. In the rest of the description, other assumptions made regarding the machines considered in this disclosure may be specified in further detail. Also, the terms “receiver-side” or “decoder-side” are used to refer to the physical or abstract entity or device which contains one or more machines, and runs these one or more machines on an encoded and eventually decoded video representation which is encoded by another physical or abstract entity or device, the “encoder-side device”.
The encoded video data may be stored into a memory device, for example as a file. The stored file may later be provided to another device. Alternatively, the encoded video data may be streamed from one device to another.
Figure 5 is a general illustration of the pipeline of Video Coding for Machines. A VCM encoder 502 encodes the input video into a bitstream 504. A bitrate 506 may be computed 508 from the bitstream 504 in order to evaluate the size of the bitstream. A VCM decoder 510 decodes the bitstream output by the VCM encoder 502. In Figure 5, the output of the VCM decoder 510 is referred to as “Decoded data for machines” 512. This data may be considered as the decoded or reconstructed video. However, in some implementations of this pipeline, this data may not have same or similar characteristics as the original video which was input to the VCM encoder 502. For example, this data may not be easily understandable by a human by simply rendering the data onto a screen. The output of VCM decoder is then input to one or more task neural networks 514. In the figure, for the sake of illustrating that there may be any number of task-NNs 514, there are three example task-NNs, and a non-specified one (Task-NN X). The goal of VCM is to obtain a low bitrate while guaranteeing that the task-NNs still perform well in terms of the evaluation metric 516 associated to each task.
One of the possible approaches to realize video coding for machines is an end-to-end learned approach. In this approach, the VCM encoder and VCM decoder mainly consist of neural networks. Figure 6 illustrates an example of a pipeline for the end-to-end learned approach. The video is input to a neural network encoder 601. The output of the neural network encoder 601 is input to a lossless encoder 602, such as an arithmetic encoder, which outputs a bitstream 604. The lossless codec may comprise a probability model 603, both in the lossless encoder and in the lossless decoder, which predicts the probability of the next symbol to be encoded and decoded. The probability model 603 may also be learned, for example it may be a neural network. At decoder-side, the bitstream 604 is input to a lossless decoder 605, such as an arithmetic decoder, whose output is input to a neural network decoder 606. The output of the neural network decoder 606 is the decoded data for machines 607, that may be input to one or more task-NNs 608.
Figure 7 illustrates an example of how the end-to-end learned system may be trained. For the sake of simplicity, only one task-NN 707 is illustrated. A rate loss 705 may be computed from the output of the probability model 703. The rate loss
705 provides an approximation of the bitrate required to encode the input video data. A task loss 710 may be computed 709 from the output 708 of the task-NN 707.
The rate loss 705 and the task loss 710 may then be used to train 711 the neural networks used in the system, such as the neural network encoder 701 , the probability model 703, the neural network decoder 706. Training may be performed by first computing gradients of each loss with respect to the neural networks that are contributing or affecting the computation of that loss. The gradients are then used by an optimization method, such as Adam, for updating the trainable parameters of the neural networks.
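The training step described above may be sketched as follows; all modules below are toy stand-ins (assumptions), not the actual neural network encoder 701, probability model 703, neural network decoder 706, or task-NN 707 of the figure.

```python
# Sketch of one training iteration: combine rate and task losses, backpropagate,
# and update the trainable parameters with Adam.
import torch
import torch.nn as nn

nn_encoder = nn.Conv2d(3, 8, 3, stride=2, padding=1)             # stand-in for encoder 701
nn_decoder = nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1)     # stand-in for decoder 706
rate_head = nn.Conv2d(8, 1, 1)                                    # stand-in for probability model 703
task_nn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 5))  # stand-in task-NN

params = (list(nn_encoder.parameters()) + list(nn_decoder.parameters())
          + list(rate_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

frames = torch.rand(2, 3, 64, 64)           # toy input batch
labels = torch.randint(0, 5, (2,))          # toy task labels

latent = nn_encoder(frames)
rate_loss = torch.mean(torch.abs(rate_head(latent)))                # crude rate proxy
decoded = nn_decoder(latent)
task_loss = nn.functional.cross_entropy(task_nn(decoded), labels)   # task loss on decoded data

loss = rate_loss + task_loss
optimizer.zero_grad()
loss.backward()     # gradients w.r.t. the networks contributing to each loss
optimizer.step()    # Adam update of the trainable parameters (task-NN kept fixed here)
```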
The machine tasks may be performed at decoder side (instead of at encoder side) for multiple reasons, for example because the encoder-side device does not have the capabilities (computational, power, memory) for running the neural networks that perform these tasks, or because some aspects or the performance of the task neural networks may have changed or improved by the time that the decoder-side device needs the tasks results (e.g., different or additional semantic classes, better neural network architecture). Also, there could be a customization need, where different clients would run different neural networks for performing these machine learning tasks.
Alternatively to an end-to-end trained codec, a video codec for machines can be realized by using a traditional codec such as H.266/VVC.
Alternatively, as described already above for the case of video coding for humans, another possible design may comprise using a traditional "base" codec, such as H.266/VVC, which additionally comprises one or more neural networks. In one possible implementation, the one or more neural networks may replace or be an alternative of one of the components of the traditional codec, such as:
- one or more in-loop filters;
- one or more intra-prediction modes;
- one or more inter-prediction modes;
- one or more transforms;
- one or more inverse transforms;
- one or more probability models, for lossless coding;
- one or more post-processing filters.
In another possible implementation, the one or more neural networks may function as an additional component, such as:
- one or more additional in-loop filters;
- one or more additional intra-prediction modes;
- one or more additional inter-prediction modes;
- one or more additional transforms;
- one or more additional inverse transforms;
- one or more additional probability models, for lossless coding;
- one or more additional post-processing filters.
Alternatively, another possible design may comprise using any codec architecture (such as a traditional codec, or a traditional codec which includes one or more neural networks, or an end-to-end learned codec), and having a post-processing neural network which adapts the output of the decoder so that it can be analyzed more effectively by one or more machines or task neural networks. For example, the encoder and decoder may be conformant to the H.266/VVC standard, a post-processing neural network takes the output of the decoder, and the output of the post-processing neural network is then input to an object detection neural network. In this example, the object detection neural network is the machine or task neural network.
Figure 8 illustrates an example including an encoder, a decoder, a post-processing filter, and a set of task-NNs. The encoder and decoder may represent a traditional image or video codec, such as a codec conformant with the VVC/H.266 standard, or may represent an end-to-end (E2E) learned image or video codec. The post-processing filter may be a neural network-based filter. The task-NNs may be neural networks that perform tasks such as object detection, object segmentation, object tracking, etc.
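The data flow of such a pipeline can be sketched as below. The callables are placeholders standing in for the (traditional or learned) codec, the NN-based post-processing filter, and the task-NNs of Figure 8; they are illustrative assumptions, not real codec or network APIs.

```python
import numpy as np

def base_codec(frame, qp=37):
    """Stand-in for encoding and decoding with a traditional or E2E learned codec."""
    step = qp / 10.0
    return np.round(frame / step) * step            # crude quantization as a proxy for coding loss

def post_filter(decoded):
    """Stand-in for the NN-based post-processing filter adapting the output for machines."""
    return np.clip(decoded, 0.0, 255.0)

task_nns = {                                        # placeholder task-NNs
    "detection": lambda x: {"boxes": []},
    "segmentation": lambda x: np.zeros(x.shape[:2], dtype=np.uint8),
}

frame = np.random.rand(64, 64, 3) * 255.0           # dummy input picture
adapted = post_filter(base_codec(frame))            # decoder output adapted by the post-filter
results = {name: nn(adapted) for name, nn in task_nns.items()}
```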
Video codecs have been developed for decades and provide sophisticated performance for human consumption. Systems that are conformant to existing standards have been broadly developed. A video codec for machine consumption may use an existing codec as a base layer and take advantage of these technologies. Furthermore, it may be easily adopted by the industry. However, the existing video codec for human consumption is not optimized for machine tasks. Changes to the existing codec may break compatibility with the standards and increase the difficulty of adopting and deploying the video codec for machines.
The present embodiments provide an improved VCM codec architecture using an existing video codec as a base-layer codec and a moderator component to adjust the bitstream of the base-layer codec to achieve better performance for the machine tasks. The VCM codec architecture according to embodiments improves the performance of the system while maintaining the compatibility of the bitstream to the existing standards, specifications and/or codec implementations. The performance of the system may be measured by the rate distortion loss. The rate distortion loss may be a weighted sum of the rate loss and the distortion loss, where the rate loss indicates the compression ratio, e.g., the size of the bitstream generated from the input data, and the distortion loss measures the performance of the one or more machine tasks. In one example, the performance of machine tasks, e.g., the classification, object detection or object segmentation task may be measured by the mean average precision of the classification or detection results. In another example, the performance of one or more machine tasks may be estimated by the distance between the two sets of features extracted from the input data and the compressed data, where the distance may be mean square error (MSE) or mean absolute error (MAE) and the feature extractor may be a neural network that is pretrained for a generic task, such as classification, or a sub-network of a pretrained neural network.
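As an illustration of the performance measure described above, the following non-normative sketch computes a rate-distortion loss as a weighted sum of a bits-per-pixel rate term and a feature-distance distortion term. The weight lam and the use of plain arrays as features are assumptions for the example; in practice the features would come from a pretrained network or a sub-network thereof.

```python
import numpy as np

def rate_loss(bitstream_bytes: int, num_pixels: int) -> float:
    return 8.0 * bitstream_bytes / num_pixels                    # bits per pixel as a rate proxy

def feature_distance(feat_input: np.ndarray, feat_compressed: np.ndarray, metric: str = "mse") -> float:
    diff = feat_input - feat_compressed
    return float((diff ** 2).mean()) if metric == "mse" else float(np.abs(diff).mean())

def rd_loss(bitstream_bytes, num_pixels, feat_in, feat_out, lam=0.01):
    # lam is an assumed weight; in practice it is tuned for the target bitrate and task
    return rate_loss(bitstream_bytes, num_pixels) + lam * feature_distance(feat_in, feat_out)

loss = rd_loss(50_000, 1920 * 1080, np.zeros((16, 16)), np.ones((16, 16)) * 0.1)
```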
Thus, in the present embodiments, a VCM codec uses an existing video codec as a base-layer codec. The base-layer codec may be a codec that may have been developed and/or optimized for human consumption. Embodiments are not limited to any specific base-layer encoder, decoder, or bitstream format. Embodiments may be applied, for example, to AVC/H.264, HEVC/H.265, VVC/H.266, or AV1 (specified by the Alliance for Open Media) as a base-layer encoder, decoder, and/or bitstream format. The VCM codec may also comprise one or more pre-processing filters at the encoder side and one or more post-processing and/or in-loop filters at the decoder side. The bitstream being sent from the VCM encoder to the decoder may comprise two parts. The first part, named the base-layer bitstream, is the bitstream generated by the base-layer encoder, and the second part, named the VCM bitstream, contains extra information to enhance the base-layer bitstream for the machine tasks.
Since the base-layer codec may be optimized for human consumption, the coding settings and configurations may not be optimal for machine tasks. For example, the
one or more in-loop filters used in the base-layer decoder may generate reconstructed frames that degrade the performance of the machine tasks.
To improve the performance of the system for machine tasks without breaking the conformance of the base-layer codec, a moderator component is introduced. The input to the moderator component is the base-layer bitstream generated by the base-layer encoder. The moderator component may parse the base-layer bitstream, modify the settings that may improve the performance for the machine tasks, and output a modified bitstream. The modified bitstream may be processed by the base-layer decoder to generate reconstructed input video. The modification to the bitstream by the moderator component does not impact the conformance to the existing standard or specification of the base-layer codec.
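A minimal sketch of such a moderator is given below. The parameter names and the dictionary-based control information are hypothetical stand-ins; a real moderator would parse and rewrite the actual parameter sets (e.g., SPS/PPS/APS NAL units) of the base-layer bitstream while keeping every field standard-conformant.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class BaseLayerParams:
    """Toy view of decoder-controlling parameters carried in the base-layer bitstream."""
    alf_enabled: bool = True
    sao_enabled: bool = True
    deblocking_enabled: bool = True

def moderate(params: BaseLayerParams, control: dict) -> BaseLayerParams:
    """Apply moderator control information; only known, conformant fields are rewritten."""
    return replace(params, **{k: v for k, v in control.items() if hasattr(params, k)})

# Example: switch SAO off for the machine tasks while leaving the other settings untouched.
modified = moderate(BaseLayerParams(), {"sao_enabled": False})
```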
In an embodiment, the moderator modifies the base-layer bitstream so that the modified bitstream causes switching one or more in-loop filters on/off in the base-layer decoder.
In an embodiment, the moderator modifies the base-layer bitstream so that the modified bitstream causes switching one or more post-processing filters on/off in the base-layer decoder.
In an embodiment, the modifications may include one or more of the following:
- switching on or off the adaptive loop filter (ALF) by setting the parameters so that it behaves as an identity filter;
- switching on or off one or more other in-loop filters, for example, cross-component adaptive loop filter, deblocking filter, sample adaptive offset (SAO) filter, bilateral filter, and a neural-network-based filter;
- switching on or off one or more post-processing filters.
In an embodiment, the moderator modifies the base-layer bitstream so that the modified bitstream causes a different setting of one or more in-loop or post-processing filters in the base-layer decoder.
In an embodiment, the modifications may include one or more of the following:
- adjusting the parameters of the ALF filter for the machine tasks;
- adjusting the parameters of one or more other in-loop filters, for example, cross-component adaptive loop filter, deblocking filter, SAO filter, bilateral filter, and a neural-network-based filter;
- adjusting the parameters of one or more post-processing filters.
In an embodiment, the modifications adjusting the parameters of the ALF filter for the machine tasks comprise one or both of the following:
- adjusting the parameters of the ALF filter to cause sharpening of blocks classified to comprise edges;
- adjusting the parameters of the ALF filter to cause blurring or smoothening of blocks that are classified to comprise texture. An illustrative sketch of such an adjustment follows this list.
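The following is an illustrative, non-normative sketch of the sharpening/smoothing behaviour described above: blocks classified as containing edges are sharpened and blocks classified as texture are smoothed. The 3x3 kernels and the gradient-based classifier are assumptions for the example; actual ALF uses diamond-shaped filters whose coefficients are signalled in adaptation parameter sets.

```python
import numpy as np

SHARPEN = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)   # edge-enhancing kernel
SMOOTH = np.full((3, 3), 1.0 / 9.0)                                      # averaging kernel

def classify_block(block: np.ndarray) -> str:
    gy, gx = np.gradient(block)
    return "edge" if np.hypot(gx, gy).mean() > 8.0 else "texture"        # assumed threshold

def filter_block(block: np.ndarray) -> np.ndarray:
    kernel = SHARPEN if classify_block(block) == "edge" else SMOOTH
    padded = np.pad(block, 1, mode="edge")
    out = np.zeros_like(block, dtype=float)
    for y in range(block.shape[0]):
        for x in range(block.shape[1]):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    return out

filtered = filter_block(np.random.rand(16, 16) * 255.0)
```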
In an embodiment, the moderator modifies the base-layer bitstream so that the modified bitstream causes a different setting for the base-layer decoder in addition to or instead of switching on or off or modifying settings of in-loop or post-processing filters. The modifications may comprise, but may not be limited to, one or more of the following:
- turning luma mapping on or off;
- adjusting the parameters of luma mapping;
- turning chroma scaling on or off;
- adjusting the parameters of chroma scaling;
- adjusting the scaling of transform coefficients for the reconstruction of a prediction residual.
In an embodiment, the modifications adjusting the parameters of luma mapping comprise the following:
- Selecting luma value range(s) that are expected to have importance for machine task accuracy or alike. In some embodiments, this may be performed as a part of a process for generating the moderator control information, while in other embodiments the VCM moderator may perform this.
- Adjusting inverse mapping of the luma value range(s) of importance such that they occupy a greater luma value range than prior to the modification. A numerical sketch of such an adjustment follows this list.
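Below is a hedged numerical sketch of the adjustment, assuming an LMCS-like piecewise-linear inverse luma mapping over 16 equal mapped-domain bins for 10-bit content. Bins covering the luma range judged important for the machine task are given larger output spans, so that range occupies more luma codewords after inverse mapping. The bin weights and bin count are illustrative assumptions, not a normative derivation.

```python
import numpy as np

def inverse_mapping_lut(bin_weights, max_val=1023):
    """Build a mapped->original luma LUT from per-bin output spans (10-bit example)."""
    spans = np.asarray(bin_weights, dtype=float)
    spans = spans / spans.sum() * (max_val + 1)              # total output range stays 0..max_val
    edges = np.concatenate(([0.0], np.cumsum(spans)))
    lut = np.zeros(max_val + 1)
    bin_size = (max_val + 1) / len(bin_weights)
    for i in range(len(bin_weights)):
        start, stop = int(i * bin_size), int((i + 1) * bin_size)
        lut[start:stop] = np.linspace(edges[i], edges[i + 1], stop - start, endpoint=False)
    return np.clip(np.round(lut), 0, max_val).astype(int)

uniform = inverse_mapping_lut([1.0] * 16)                         # mapping before the modification
boosted = inverse_mapping_lut([1.0] * 6 + [2.0] * 4 + [1.0] * 6)  # mid-range deemed important gets a wider span
```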
According to an embodiment, the moderator component receives moderator control information in or along the VCM bitstream, wherein the moderator control information controls how the base-layer bitstream is modified.
In various embodiments, the moderator control information may comprise, but may not be limited to, one or more of the following:
- on or off signal for one or more in-loop filters;
- parameters, such as filter coefficients, for one or more in-loop filters;
- on or off signal for one or more post-processing filters;
- parameters, such as filter coefficients, for one or more post-processing filters;
- on or off signal for luma mapping;
- parameters for luma mapping;
- on or off signal for chroma scaling;
- parameters for chroma scaling;
- scaling matrix for transform coefficients;
wherein in-loop filters and/or post-processing filters may comprise, but may not be limited to, one or more of the following: adaptive loop filters, cross-component adaptive loop filters, deblocking filter, SAO filter, bilateral filter, neural-network-based in-loop filters.
In various embodiments, the base-layer bitstream may have syntax structures and/or syntax elements corresponding to the modifications and/or the moderator control information. These syntax structures and/or syntax elements may be referred to as moderated base-layer parameters.
According to an embodiment, a moderator component resides in the VCM decoder side to manipulate the bitstream received from the encoder.
According to an embodiment, the moderator component signals relevant modifications to one or more post-processing filters to generate outputs for machine and/or human tasks.
Figure 9 shows a system architecture, where a VCM moderator is at the VCM decoder side. It needs to be understood that the system architecture presents modules and signals that are optional, including for example the VCM pre-processing filter and the VCM post-processing filter. Input data x is processed by the VCM pre-processing filter to generate an input to the base-layer encoder. The base-layer encoder encodes the input into a base-layer bitstream. At the decoder side, the VCM moderator may parse the base-layer bitstream, modify the bitstream, and output the modified bitstream. Next, the base-layer decoder decodes the modified base-layer bitstream and outputs data y, which may be regarded as the
reconstructed data corresponding to the input data x. Next, the one or more VCM post-processing filters may process y to generate output z for one or more machine tasks. The one or more VCM post-processing filters may also output reconstructed data for human consumption. In such architecture, the VCM pre-processing filter may signal moderator control information to the VCM moderator in or along the VCM bitstream. In another alternative, a VCM encoder may comprise control logic or alike that signals moderator control information in or along the VCM bitstream to the VCM moderator.
Besides signaling to the VCM moderator, the VCM pre-processing filter may also signal post-filter control information to assist the VCM post-processing filter to generate output for the machine tasks. In another alternative, a VCM encoder may comprise control logic or alike that signals post-filter control information in or along the VCM bitstream to the VCM post-processing filter to generate output for the machine tasks.
In various embodiments, the post-filter control information may comprise, but may not be limited to, one or more of the following:
- on or off signal for one or more post-processing filters;
- parameters, such as filter coefficients, for one or more post-processing filters;
wherein post-processing filters may comprise, but may not be limited to, one or more of the following: ALF applied as a post-filter, cross-component ALF applied as a post-filter, Wiener filters, cross-component Wiener filters, deblocking filter, SAO filter, bilateral filter, neural-network-based post-processing filters. A sketch of applying such post-filter control information follows.
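The following is a small, non-normative sketch of how a decoder-side component might apply such post-filter control information; the record layout (an on/off flag plus separable filter taps) and the simple FIR filtering are assumptions made for illustration only.

```python
import numpy as np

def apply_post_filter(decoded: np.ndarray, control: dict) -> np.ndarray:
    """Apply a separable FIR post-filter when the control information switches it on."""
    if not control.get("enabled", False):
        return decoded
    taps = np.asarray(control["coefficients"], dtype=float)
    taps = taps / taps.sum()                                    # normalize to preserve mean level
    horiz = np.apply_along_axis(lambda r: np.convolve(r, taps, mode="same"), 1, decoded)
    return np.apply_along_axis(lambda c: np.convolve(c, taps, mode="same"), 0, horiz)

control_info = {"enabled": True, "coefficients": [1.0, 2.0, 1.0]}   # assumed signalled values
filtered = apply_post_filter(np.random.rand(64, 64), control_info)
```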
In an embodiment, the moderator at the decoder side may output more than one bitstream. One bitstream may be processed by a standard base-layer decoder to generate reconstructed frames for human consumption. Other bitstreams may be processed by standard base-layer decoders and the VCM post-processing filters to generate reconstructed frames for machine consumption.
According to another embodiment, a moderator component may be in the VCM encoder side to manipulate the bitstream. The modification to the bitstream may be signaled from the encoder to the VCM decoder for the VCM post-processing filter to generate reconstructed data for the machine tasks.
Figure 10 shows a system architecture where a VCM moderator is at the VCM encoder side. It needs to be understood that the system architecture presents modules and signals that are optional, including for example the VCM pre-processing filter and the VCM post-processing filter. Input data x is processed by the VCM pre-processing filter to generate an input to the base-layer encoder. The base-layer encoder encodes the input into a base-layer bitstream. The VCM moderator may parse the base-layer bitstream, modify the bitstream, and output the modified bitstream. On the decoder side, the base-layer decoder decodes the modified base-layer bitstream and outputs data y, which may be regarded as the reconstructed data corresponding to the input data x. Next, the one or more VCM post-processing filters may process y to generate output z for one or more machine tasks. The one or more VCM post-processing filters may also output reconstructed data for human consumption. In this architecture, one or more VCM pre-processing filters may signal moderator control information to the VCM moderator at the encoder side. In another alternative, a VCM encoder may comprise control logic or alike that signals moderator control information to the VCM moderator.
The VCM moderator may send post-filter control information in or along a VCM bitstream to assist the VCM post-processing filter to generate the output for the machine tasks.
Besides signaling to the VCM moderator, the VCM pre-processing filter may also signal post-filter control information to assist the VCM post-processing filter to generate output for the machine tasks. In another alternative, a VCM encoder may comprise control logic or alike that signals post-filter control information in or along the VCM bitstream to the VCM post-processing filter to generate output for the machine tasks.
Figure 11 shows a VCM architecture where more than one base-layer decoder is used at the VCM decoder side. The VCM moderator component may output more than one bitstream. One bitstream may be identical to the bitstream generated by the base-layer encoder. This bitstream may be processed by a base-layer decoder to generate reconstructed input data for human consumption. Other bitstreams may be modified by the VCM moderator for better performance for machine tasks.
Figures 9, 10, and 11 illustrate a VCM encoder that comprises a base-layer encoder. It needs to be understood that embodiments may be similarly realized when base-layer encoding happens before VCM-aware processing and a VCM encoder gets a base-layer bitstream as input. Additionally, a VCM encoder may get reconstructed base-layer pictures as input or may decode the base-layer bitstream to obtain reconstructed base-layer pictures. The VCM encoder encodes moderator control information and/or post-filter control information in or along the VCM bitstream. Additionally, the VCM encoder may include the base-layer bitstream in or along the VCM bitstream.
In an embodiment, the moderator may be a NN-based system that is trained for optimizing the modifications performed by the VCM moderator and/or the moderated base-layer parameters of the base-layer bitstream for one or more machine vision tasks.
In an embodiment, more than one VCM moderator may be used in the encoder and/or decoder side for optimizing the base-layer bitstream for more than one machine vision tasks.
In an embodiment, the VCM moderator may manipulate the parameters of the base-layer bitstream for all pictures. In another embodiment, the VCM moderator manipulates the parameters of the base-layer bitstream for a subset of pictures. The method of selecting the subset of pictures may comprise, but may not be limited to, one or more of the following:
- The subset of pictures may comprise pictures that are not used as reference pictures or are not stored in the reference picture buffer of the base-layer codec. Consequently, the difference between a reconstructed picture of the base-layer encoder compared to the respective reconstructed picture of the base-layer decoder will not propagate temporally.
- The subset of pictures may comprise pictures preceding a random access point (RAP) picture in decoding order. Consequently, the difference between reconstructed pictures of the base-layer encoder compared to the respective reconstructed pictures of the base-layer decoder is limited to the subset of pictures and will not propagate to the RAP picture.
- The subset of pictures may comprise pictures at selected highest temporal sublayers. Consequently, the difference between reconstructed pictures of the base-layer encoder compared to the respective reconstructed pictures of the base-layer decoder is limited to certain temporal sublayers only.
- When the base-layer bitstream comprises several scalability layers (e.g. for spatial or quality scalability), the subset of pictures may comprise pictures at selected highest scalability layers. Consequently, the difference between reconstructed pictures of the base-layer encoder compared to the respective reconstructed pictures of the base-layer decoder is limited to certain scalability layers only.
- The subset of pictures may be selected based on the quality of the pictures. For example, if the picture quality is high, it may be determined to exclude a picture from the subset of pictures, and if the picture quality is moderate or low, it may be determined to include a picture in the subset of pictures. A picture quality may be estimated e.g. based on one or more quantization parameter values, which may indicate a quantization step size for prediction residual.
For example, the VCM moderator may modify the settings of the one or more in-loop filters only for the one or more non-reference frames. An illustrative sketch of selecting such a subset of pictures is given below.
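The sketch below combines, for illustration only, several of the selection criteria listed above (non-reference pictures, highest temporal sublayers, and a quality estimate based on the quantization parameter). The Picture fields and the threshold values are assumptions; a real implementation would read these properties from the base-layer bitstream.

```python
from dataclasses import dataclass

@dataclass
class Picture:
    poc: int                 # picture order count
    is_reference: bool       # used as a reference picture by the base-layer codec
    temporal_id: int         # temporal sublayer
    avg_qp: float            # average quantization parameter, as a quality proxy

def select_pictures(pictures, max_tid, qp_threshold=30.0):
    """Pick pictures whose modification cannot propagate and whose quality is moderate or low."""
    return [p for p in pictures
            if not p.is_reference               # differences do not propagate temporally
            and p.temporal_id >= max_tid        # highest temporal sublayer(s) only
            and p.avg_qp >= qp_threshold]       # exclude high-quality (low-QP) pictures

gop = [Picture(0, True, 0, 25.0), Picture(1, False, 3, 35.0), Picture(2, False, 3, 27.0)]
subset = select_pictures(gop, max_tid=3)        # -> only the picture with POC 1 qualifies
```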
In an embodiment, the VCM pre-processing filter or the control logic of the VCM encoder or alike includes information indicative of the subset of pictures to be modified by a VCM moderator in moderator control information.
In an embodiment, the VCM moderator receives, within moderator control information, information indicative of the subset of pictures to be modified. The VCM moderator interprets the information indicative of the subset of pictures to be modified and modifies the subset of pictures.
It needs to be understood that the information indicative of the subset of pictures to be modified by a VCM moderator may indicate a second subset of pictures that are not to be modified by a VCM moderator, from which the remaining pictures may be concluded to form the subset of pictures.
In another embodiment, the VCM moderator may be applied to one or more components of image/video picture(s). For example, it may be applied to one or more of the components of the content in RGB, YUV or any other color spaces. For example, the VCM moderator may be applied to luma (Y) only, whereas chroma (U, V) can be left unmodified.
In an embodiment, the VCM moderator may be applied to certain parts of the picture(s). For example, it may change the one or more of the moderated base-layer parameters in prediction unit (PU), coding unit (CU), coding tree unit (CTU), slice, tile, subpicture, or picture levels.
In an embodiment, the VCM encoder includes moderator control information and/or post-filter control information in a VCM bitstream together with a base-layer bitstream. In an embodiment, the VCM decoder decodes moderator control information and/or post-filter control information from a VCM bitstream that also contains a base-layer bitstream. In these embodiments, the VCM bitstream may be structured in a way that it encapsulates the base-layer bitstream and contains the moderator control information and/or the post-filter control information in syntax structures separate from the encapsulated base-layer bitstream. For example, a VCM bitstream may comprise a sequence of VCM units. A VCM unit may comprise a VCM unit header and a VCM unit payload. The VCM unit header may comprise a type syntax element that indicates the type of data contained in the VCM unit payload, wherein the types may comprise, but may not be limited to, one or more of the following: a portion of the base-layer bitstream, moderator control information, post-filter control information. A portion of the base-layer bitstream may for example be a NAL unit or an RBSP according to a video coding standard, such as H.264, H.265, or H.266. VCM unit boundaries in the sequence of VCM units may be concluded for example in one of the following ways: a VCM unit length (e.g. in bytes) may be included in the VCM unit header; a VCM unit header may start with or may be immediately preceded by a start code that is guaranteed not to appear, or is unlikely to appear, in locations other than VCM unit boundaries in the VCM bitstream.
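A simplified, non-normative sketch of walking such a length-prefixed sequence of VCM units is shown below. The header layout (a one-byte type code plus a four-byte big-endian payload length) and the type codes are assumptions made for the example; the embodiment only requires that unit boundaries can be concluded, e.g. from a length field or a start code.

```python
BASE_LAYER, MODERATOR_CTRL, POST_FILTER_CTRL = 0, 1, 2         # hypothetical VCM unit type codes

def parse_vcm_units(data: bytes):
    """Yield (type, payload) pairs for each VCM unit in a length-prefixed VCM bitstream."""
    pos = 0
    while pos + 5 <= len(data):
        unit_type = data[pos]                                   # 1-byte type in the VCM unit header
        length = int.from_bytes(data[pos + 1:pos + 5], "big")   # 4-byte payload length
        yield unit_type, data[pos + 5:pos + 5 + length]
        pos += 5 + length

# Example: one moderator-control unit followed by an encapsulated base-layer NAL unit.
stream = bytes([MODERATOR_CTRL, 0, 0, 0, 2, 0xAB, 0xCD,
                BASE_LAYER, 0, 0, 0, 3, 0x00, 0x00, 0x01])
units = list(parse_vcm_units(stream))
```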
In an embodiment, the VCM encoder includes moderator control information and/or post-filter control information in one or more SEI messages (which may be called a moderator control information SEI message and/or post-filter control information SEI message, respectively) or alike within a base-layer bitstream. In an embodiment, the VCM decoder decodes moderator control information and/or post-filter control information from one or more SEI messages or alike contained in a base-layer bitstream.
A uniform resource identifier (URI) may be defined as a string of characters used to identify a name of a resource. Such identification enables interaction with representations of the resource over a network, using specific protocols. A URI is defined through a scheme specifying a concrete syntax and associated protocol for the URI. The uniform resource locator (URL) and the uniform resource name (URN) are forms of URI. A URL may be defined as a URI that identifies a web resource and specifies the means of acting upon or obtaining the representation of the resource, specifying both its primary access mechanism and network location. A URN may be defined as a URI that identifies a resource by name in a particular namespace. A URN may be used for identifying a resource without implying its location or how to access it.
In an embodiment, a moderator control SEI message or alike may comprise, but may not be limited to, one or more of the following:
- One or more syntax elements indicating one or more machine tasks for which this moderator control SEI message provides suitable modifications of the base-layer bitstream. Such a syntax element may for example be an index to a pre-defined enumerated list of machine tasks or a URI identifying a machine task.
- One or more syntax elements identifying one or more machine task NNs or algorithms for which this moderator control SEI message provides suitable modifications of the base-layer bitstream. Such a syntax element may for example be a URI identifying a machine task NN.
- One or more syntax elements for carrying moderator control information. For example, the SEI message or alike may comprise syntax elements carrying parameters, such as filter coefficients, of a particular in-loop filter. For example, the SEI message or alike may comprise one or more adaptation_parameter_set_rbsp( ) syntax structures, carrying ALF APSs that the VCM moderator may use to replace the respective ALF APS NAL units in the base-layer bitstream. A sketch of such an APS replacement follows this list.
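The sketch below illustrates, under simplifying assumptions, the APS-replacement use case mentioned in the last item: adaptation parameter sets carried in a moderator control SEI message are substituted for the ALF APS NAL units with the same APS identifier in the base-layer bitstream. The NalUnit model and the string-based NAL type are illustrative stand-ins; real NAL unit and APS parsing follows the H.266 syntax.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NalUnit:
    nal_type: str            # e.g. "PREFIX_APS", "SLICE" (simplified)
    aps_id: Optional[int]    # adaptation parameter set id, if applicable
    payload: bytes

def replace_alf_aps(base_layer: List[NalUnit], sei_aps: List[NalUnit]) -> List[NalUnit]:
    """Replace ALF APS NAL units whose aps_id matches an APS carried in the SEI message."""
    replacements = {aps.aps_id: aps for aps in sei_aps}
    return [replacements.get(nal.aps_id, nal) if nal.nal_type == "PREFIX_APS" else nal
            for nal in base_layer]

bitstream = [NalUnit("PREFIX_APS", 0, b"\x01"), NalUnit("SLICE", None, b"\x02")]
moderated = replace_alf_aps(bitstream, [NalUnit("PREFIX_APS", 0, b"\xff")])   # APS 0 is swapped
```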
In an embodiment, a post-filter control SEI message or alike may comprise, but may not be limited to, one or more of the following:
- One or more syntax elements indicating one or more machine tasks for which this post-filter control SEI message provides or characterizes a suitable postprocessing filter. Such a syntax element may for example be an index to a pre-defined enumerated list of machine tasks or a URI identifying a machine task.
- One or more syntax elements identifying one or more machine task NNs or algorithms for which this post-filter control SEI message provides or
characterizes a suitable post-filter. Such a syntax element may for example be a URI identifying a machine task NN.
- One or more syntax elements for post-filter control information. For example, the SEI message or alike may comprise syntax elements carrying parameters, such as filter coefficients, of a particular post-processing filter. For example, the SEI message or alike may identify or carry a neural network post-processing filter.
In yet another embodiment, the modifications, for example, the parameters for one or more in-loop filters and/or post-processing filters, to the base-layer bitstream or part of such information could become available via some “out-of-band” mechanism. In an embodiment, information for fetching such data could be signaled or sent via SEI messages.
In an embodiment, the VCM encoder indicates moderator control information and/or post-filter control information along a VCM bitstream or a base-layer bitstream. In an embodiment, the VCM decoder decodes moderator control information and/or post-filter control information along a VCM bitstream or a base-layer bitstream. Mechanisms for indicating information along a VCM or base-layer bitstream may comprise, but may not be limited to, one or more of the following:
- a first track carries moderator control information and/or post-filter control information, and a second track carries a VCM or base-layer bitstream, wherein a track may conform or be similar to ISO base media file format track;
- metadata, such as a sample group or sample auxiliary information conforming or similar to the ISO base media file format, carries moderator control information and/or post-filter control information and is associated with a track carrying a VCM or base-layer bitstream, wherein a track may, for example, conform or be similar to ISO base media file format track;
- a first representation or stream carries moderator control information and/or post-filter control information, and a second representation or stream carries a VCM or base-layer bitstream, wherein a stream or representation may, for example, conform or be similar to a Representation of ISO/IEC 23009-1 (known as MPEG DASH) or a Real-time Transport Protocol stream as specified by the Internet Engineering Task Force (IETF);
- a media description carries moderator control information and/or post-filter control information, wherein a media description may, for example, conform
to or be similar to IETF Session Description Protocol or MPEG DASH Media Presentation Description.
In an embodiment, the system architecture may comprise more than two bitstreams/sub-bitstreams where one or more of the bitstreams may be used for human consumption purposes and one or more bitstreams may be used for one or more machine vision tasks. The bitstreams for both human and machine consumption may be coded with traditional codecs. Alternatively, one or more of the bitstreams for human and/or machine consumption are coded with traditional codecs and one or more are coded with neural-network-based codecs. In this system, one or more moderator components may be used for bitstreams of traditional codecs in order to change/optimize one or more of the previously described parameters of the bitstream. An example of such a system is illustrated in Figure 12.
According to the previous embodiment, the changes made by the moderators in different bitstreams may be different.
In another embodiment, the NN-based moderator and the post-filter may be trained jointly.
The method for decoding according to an embodiment is shown in Figure 13a. The method generally comprises receiving 1350 an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and modifying 1360 the received input bitstream for improving task performance of one or more machine tasks. Each of the steps can be implemented by a respective module of a computer system.
An apparatus according to an embodiment comprises means for receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and means for modifying the received input bitstream for improving task performance of one or more machine tasks. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 13a according to various embodiments.
The method for encoding according to an embodiment is shown in Figure 13b. The method generally comprises receiving 1310 an input video sequence; encoding 1320 the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; generating 1330 moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and including 1340 the moderator control information in or along the bitstream.
An apparatus according to an embodiment comprises means for receiving an input video sequence; means for encoding the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream; means for generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and means for including the moderator control information in or along the bitstream. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 13b according to various embodiments.
An example of an apparatus is shown in Figure 14. The apparatus is a user equipment for the purposes of the present embodiments. The apparatus 90 comprises a main processing unit 91, a memory 92, a user interface 94, and a communication interface 93. The apparatus according to an embodiment, shown in Figure 14, may also comprise a camera module 95. Alternatively, the apparatus may be configured to receive image and/or video data from an external camera device over a communication network. The memory 92 stores data including computer program code in the apparatus 90. The computer program code is configured to implement the method according to various embodiments by means of various computer modules. The camera module 95 or the communication interface 93 receives data, in the form of images or a video stream, to be processed by the processor 91. The communication interface 93 forwards processed data, i.e., the image file, for example to a display of another device, such as a virtual reality headset. When the apparatus 90 is a video source comprising the camera module 95, user inputs may be received from the user interface.
The various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the method. For example, a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of various embodiments.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined.
Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications, which may be made without departing from the scope of the present disclosure as defined in the appended claims.
Claims
1. An apparatus for decoding comprising
- means for receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and
- means for modifying the received input bitstream for improving task performance of one or more machine tasks.
2. An apparatus according to claim 1, further comprising:
- means for decoding the modified bitstream by a base-layer decoder; and
- means for applying one or more machine tasks to the decoded pictures.
3. An apparatus according to claim 1 or 2, wherein the means for modifying the received input bitstream comprise one or more of the following:
- means for switching an in-loop filter on or off;
- means for adjusting parameters of an in-loop filter.
4. An apparatus according to claim 3, wherein the means for adjusting parameters of an in-loop filter comprise one or both of the following:
- adjusting the parameters of the in-loop filter to cause sharpening of blocks classified to comprise edges;
- adjusting the parameters of the in-loop filter to cause blurring or smoothening of blocks that are classified to comprise texture.
5. An apparatus according to any of the claims 1 to 4 further comprising:
- means for receiving moderator control information; and
- means for modifying the received bitstream based on the moderator control information.
6. An apparatus according to claim 2, further comprising:
- means for filtering the decoded pictures prior to applying one or more machine tasks.
7. An apparatus according to claim 6 further comprising:
- means for receiving post-filter control information; and
- means for filtering the decoded pictures based on the post-filter control information.
8. An apparatus according to any of the preceding claims 1 to 7, further comprising:
- means for receiving and interpreting information indicative of a subset of pictures subject to be modified for improving task performance of one or more machine tasks.
9. An apparatus according to any of the claims 1 to 8, wherein the input bitstream comprises a base-layer bitstream and a second bitstream, wherein the second bitstream comprises information to modify the base-layer bitstream or to filter decoded pictures.
10. An apparatus according to any of the claims 1 to 8, wherein the input bitstream comprises a base-layer bitstream and the base-layer bitstream comprises supplemental information to modify the base-layer bitstream or to filter decoded pictures.
11. An apparatus for encoding, comprising
- means for receiving an input video sequence;
- means for encoding the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream;
- means for generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and
- means for including the moderator control information in or along the bitstream.
12. An apparatus according to claim 11, wherein the caused modifications of the encoded video stream comprise one or more of the following:
- switching an in-loop filter on or off;
- adjusting parameters of an in-loop filter.
13. An apparatus according to claim 12, wherein adjusting parameters of an in-loop filter comprises one or both of the following:
- adjusting the parameters of the in-loop filter to cause sharpening of blocks classified to comprise edges;
- adjusting the parameters of the in-loop filter to cause blurring or smoothening of blocks that are classified to comprise texture.
14. An apparatus according to any of the claims 11 to 13 further comprising:
- means for generating post-filter control information for adjusting filtering of decoded pictures for improving task performance of one or more machine tasks.
15. An apparatus according to any of the claims 11 to 14 further comprising:
- means for generating information indicative of a subset of pictures subject to be modified for improving task performance of one or more machine tasks.
16. An apparatus according to any of the claims 11 to 15, wherein the bitstream comprises a base-layer bitstream and a second bitstream, wherein the second bitstream comprises information to modify the base-layer bitstream or to filter decoded pictures.
17. An apparatus according to any of the claims 11 to 15, wherein the bitstream comprises a base-layer bitstream and the base-layer bitstream comprises supplemental information to modify the base-layer bitstream or to filter decoded pictures.
18. A method for decoding, comprising
- receiving an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and
- modifying the received input bitstream for improving task performance of one or more machine tasks.
19. A method for encoding, comprising
- receiving an input video sequence;
- encoding the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream;
- generating moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and
- including the moderator control information in or along the bitstream.
20. An apparatus for decoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
- receive an input bitstream, wherein the input bitstream comprises an encoded video stream generated by a base-layer encoder; and
- modify the received input bitstream for improving task performance of one or more machine tasks.
21. An apparatus for encoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
- receive an input video sequence;
- encode the input video sequence by a base-layer encoder to a bitstream comprising an encoded video stream;
- generate moderator control information for causing modifications of the encoded video stream for improving task performance of one or more machine tasks; and
- include the moderator control information in or along the bitstream.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FI20216267 | 2021-12-13 | ||
FI20216267 | 2021-12-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023111384A1 true WO2023111384A1 (en) | 2023-06-22 |
Family
ID=86773676
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FI2022/050732 WO2023111384A1 (en) | 2021-12-13 | 2022-11-08 | A method, an apparatus and a computer program product for video encoding and video decoding |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023111384A1 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160255355A1 (en) * | 2013-10-11 | 2016-09-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and arrangement for video transcoding using mode or motion or in-loop filter information |
US20170078704A1 (en) * | 2014-03-04 | 2017-03-16 | Sagemcom Broadband Sas | Method for modifying a binary video stream |
Non-Patent Citations (3)
Title |
---|
SHENG-PO WANG (ITRI), CHING-CHIEH LIN (ITRI), CHUN-LUNG LIN (ITRI): "[VCM] A study on impact of coding tools on machine vision performance and visual quality", 134. MPEG MEETING; 20210426 - 20210430; ONLINE; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 23 April 2021 (2021-04-23), pages 1 - 7, XP030295525 * |
WEN GAO (TENCENT), XIAOZHONG XU, SHAN LIU: "[VCM] Response to CfE: Investigation of VVC Codec for Video Coding for Machine", 134. MPEG MEETING; 20210426 - 20210430; ONLINE; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 22 April 2021 (2021-04-22), pages 1 - 8, XP030295217 * |
WENHAN ZHANG (GYRFALCONTECH), LU YU, LINGYU DUAN, YUAN ZHANG, PATRICK DONG, LIN YANG: "[VCM] Hybrid Framework for combined human and machine vision", 129. MPEG MEETING; 20200113 - 20200117; BRUSSELS; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 12 January 2020 (2020-01-12), pages 1 - 3, XP030225138 * |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22906727; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE