WO2016111199A1 - Image processing apparatus, image processing method, program, and recording medium - Google Patents
Image processing apparatus, image processing method, program, and recording medium
- Publication number
- WO2016111199A1 (PCT/JP2015/086202)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- picture
- unit
- information
- image processing
- encoded data
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/31—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/2362—Generation or processing of Service Information [SI]
Definitions
- the present disclosure relates to an image processing apparatus, an image processing method, a program, and a recording medium, and more particularly to an image processing apparatus, an image processing method, a program, and a recording medium with which the granularity of trick play can be easily controlled.
- BD Blu-ray (registered trademark) Disc
- AVC Advanced Video Coding
- ES Elementary Stream
- the playback device can easily perform trick play such as fast forward playback and rewind playback by using the GOP structure map.
- the playback device recognizes the I pictures in a GOP based on the type of each picture constituting the GOP described in the GOP structure map, and parses and plays back only those I pictures, which makes fast-forward playback and rewind playback easy.
- the next-generation BD standard is under formulation by the BDA (Blu-ray (registered trademark) Disc Association).
- HEVC High Efficiency Video Coding
- the present disclosure has been made in view of such circumstances, and enables easy control of trick play granularity.
- the image processing apparatus of the first aspect of the present disclosure is an image processing apparatus including a setting unit that sets additional information of encoded data of a picture, the additional information including reference layer information representing the layer of the reference relationship of the picture.
- the image processing method and program of the first aspect of the present disclosure correspond to the image processing device of the first aspect of the present disclosure.
- additional information of coded data of the picture including reference layer information representing a layer of reference relationship of the picture is set.
- the image processing apparatus of the second aspect of the present disclosure is an image processing apparatus including a selection unit that selects a picture to be reproduced based on reference layer information representing the layer of the reference relationship of the picture, the reference layer information being included in additional information of encoded data of the picture.
- the image processing method and program of the second aspect of the present disclosure correspond to the image processing device of the second aspect of the present disclosure.
- a picture to be reproduced is selected based on reference layer information representing a layer of a reference relation of the picture, which is included in additional information of encoded data of the picture.
- the recording medium of the third aspect of the present disclosure is a recording medium on which a coded stream is recorded, the coded stream including encoded data of pictures and additional information of the encoded data, the additional information including reference layer information representing the layer of the reference relationship of each picture.
- the recording medium is mounted on and reproduced by an information processing apparatus, and causes the information processing apparatus that has acquired the coded stream to select a picture to be reproduced based on the reference layer information included in the additional information.
- the granularity of trick play can be easily controlled.
- FIG. 1 is a block diagram showing a configuration example of an embodiment of a recording and reproduction system to which the present disclosure is applied. FIG. 2 is a diagram showing an example of the directory structure of files recorded on an optical disc. FIG. 3 is a block diagram showing a configuration example of a file generation unit. FIG. 4 shows a configuration example of the AU of a leading picture. FIG. 5 is a diagram showing an example of the syntax of the GOP structure map. FIG. 6 is a diagram explaining reference layer information. FIG. 7 is a block diagram showing a configuration example of the video encoding unit.
- Further figures include a flowchart explaining the details of the encoding process, a block diagram showing a configuration example of a fast-forward reproduction unit, a block diagram showing a configuration example of its decoding unit, a flowchart explaining the fast-forward reproduction process of the fast-forward reproduction unit, a flowchart explaining the details of the decoding process, and a block diagram showing a configuration example of the hardware of a computer.
- FIG. 1 is a block diagram showing a configuration example of an embodiment of a recording and reproduction system to which the present disclosure is applied.
- the recording and reproducing system of FIG. 1 includes a recording device 1, a reproducing device 2, and a display device 3.
- the playback device 2 and the display device 3 are connected via an HDMI (registered trademark) (High Definition Multimedia Interface) cable 4.
- the playback device 2 and the display device 3 may be connected via a cable of another standard, or may be connected via wireless communication.
- the recording device 1 records content such as video and audio, and the reproduction device 2 reproduces the content. Provision of content from the recording device 1 to the reproducing device 2 is performed using an optical disc 11 (recording medium) mounted on the recording device 1 and the reproducing device 2 (information processing device).
- the optical disc 11 is a disc on which content is recorded in a format conforming to the BD-ROM (Read Only Memory) format. The recording device 1 is therefore, for example, a device used by the content author.
- the optical disc 11 may be a disc on which content is recorded in a format conforming to another standard, such as BD-R or BD-RE. Content may also be provided from the recording device 1 to the reproducing device 2 using removable media other than an optical disc, such as a memory card incorporating flash memory.
- although the optical disc 11 on which content is recorded by the recording device 1 is described below as being provided to the reproduction device 2, in reality an optical disc 11, which is one of the discs copied from a master on which the content was recorded by the recording device 1, is provided to the reproduction device 2.
- Video data, audio data, and the like are input to the recording device 1 (image processing device).
- the recording device 1 encodes these data to generate ES, and multiplexes the data to generate an AV stream which is one TS (Transport Stream).
- the recording device 1 records the generated AV stream and the like on the optical disc 11.
- the playback device 2 (image processing device) drives the drive and reads the AV stream recorded on the optical disc 11.
- the playback device 2 separates the AV stream into a video stream that is ES of video data and an audio stream that is ES of audio data, and decodes the AV stream.
- the playback device 2 outputs the video data and audio data obtained by decoding to the display device 3.
- the display device 3 receives the video data transmitted from the reproduction device 2 and displays a video on a built-in monitor based on the video data. Further, the display device 3 receives audio data transmitted from the reproduction device 2 and outputs audio from a built-in speaker based on the audio data.
- FIG. 2 is a diagram showing an example of the directory structure of files recorded on the optical disk 11 of FIG.
- Each file recorded on the optical disk 11 is hierarchically managed by the directory structure.
- One root directory is created on the optical disk 11.
- the BDMV directory is placed under the root directory.
- in the BDMV directory, an Index file, which is a file named "Index.bdmv", and a Movie Object file, which is a file named "MovieObject.bdmv", are stored.
- in the Index file, for example, a list of the title numbers recorded on the optical disc 11, and the types and numbers of the objects to be executed corresponding to the title numbers, are described.
- there are two types of objects: movie objects (Movie Object) and BD-J objects (BD-J Object).
- a movie object is an object in which a navigation command, which is a command used for playing a PlayList, is described.
- the BD-J object is an object in which a BD-J application is described. Movie objects are described in the Movie Object file.
- under the BDMV directory, a PLAYLIST directory, a CLIPINF directory, a STREAM directory, and the like are also provided.
- in the PLAYLIST directory, PlayList files are stored, in which PlayLists describing playback management information for managing the playback of AV streams are written.
- to each PlayList file, a name combining a 5-digit number and the extension ".mpls" is set.
- the file names "00000.mpls", "00002.mpls", and "00003.mpls" are respectively set to the three PlayList files in FIG. 2.
- in the CLIPINF directory, Clip Information files are stored, each holding information on an AV stream of a predetermined unit.
- to each Clip Information file, a name combining a 5-digit number and the extension ".clpi" is set.
- the file names “01000.clpi”, “02000.clpi”, and “03000.clpi” are respectively set in the three Clip Information files in FIG.
- in the STREAM directory, AV streams of predetermined units are stored as stream files.
- to each stream file, a name combining a 5-digit number and the extension ".m2ts" is set.
- the file names “01000.m2ts”, “02000.m2ts”, and “03000.m2ts” are respectively set in the three stream files in FIG.
- the Clip Information file and the stream file in which the same five-digit number is set as the file name are files constituting one Clip.
- when playing the stream file "01000.m2ts", the Clip Information file "01000.clpi" is used, and when playing the stream file "02000.m2ts", the Clip Information file "02000.clpi" is used.
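The 5-digit-number pairing described above can be sketched as follows. This is an illustrative helper, not part of the BD specification; the function name and dict layout are assumptions.

```python
def clip_files(clip_number: int) -> dict:
    """Return the file names that make up one Clip: a Clip Information
    file and a stream file sharing the same 5-digit number, placed in
    the CLIPINF and STREAM directories under BDMV."""
    base = f"{clip_number:05d}"
    return {
        "clip_information": f"BDMV/CLIPINF/{base}.clpi",
        "stream": f"BDMV/STREAM/{base}.m2ts",
    }
```

For example, `clip_files(1000)` pairs "01000.clpi" with "01000.m2ts", matching the Clip described in the text.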
- FIG. 3 is a block diagram showing a configuration example of a file generation unit for generating a stream file in the recording device 1 of FIG.
- the file generation unit 50 in FIG. 3 includes a setting unit 51, a video encoding unit 52, a NAL (Network Abstraction Layer) conversion unit 53, a multiplexing unit 54, an audio encoding unit 55, and a file conversion unit 56.
- the setting unit 51 of the file generation unit 50 sets parameter sets such as SPS (Sequence Parameter Set), PPS (Picture Parameter Set), SEI (additional information) and the like.
- in one of the SEIs of the leading picture of a GOP (hereinafter referred to as the leading picture), a GOP structure map is stored that includes reference layer information, a number representing the reference relationship layer (sub-layer) of each of the pictures constituting the GOP, including the leading picture.
- the setting unit 51 supplies the set parameter sets to the video encoding unit 52 and the NAL conversion unit 53.
- Video data is input to the video encoding unit 52 in units of pictures.
- the video encoding unit 52 encodes each picture of the input video data in units of CU (Coding Unit) according to the HEVC method. At this time, the parameter set supplied from the setting unit 51 is used as needed.
- the video encoding unit 52 supplies, to the NAL conversion unit 53, the encoded data in units of slices of each picture obtained as a result of the encoding.
- the NAL conversion unit 53 converts the parameter sets supplied from the setting unit 51 and the encoded data supplied from the video encoding unit 52 into NAL units, each consisting of a NAL header and a data part.
- the NAL conversion unit 53 supplies the generated NAL units to the multiplexing unit 54.
- the multiplexing unit 54 groups the NAL units supplied from the NAL conversion unit 53 in units of pictures to generate AUs (Access Units).
- the multiplexing unit 54 supplies a video stream consisting of one or more AUs to the file conversion unit 56.
- Audio data is input to the audio encoding unit 55.
- the audio encoding unit 55 encodes the audio data, and supplies the resulting audio stream to the file conversion unit 56.
- the file conversion unit 56 multiplexes the video stream (coded stream) supplied from the multiplexing unit 54 and the audio stream supplied from the audio encoding unit 55 to generate an AV stream.
- the file conversion unit 56 converts the generated AV stream into a file to generate and output a stream file. This stream file is recorded on the optical disc 11.
- FIG. 4 shows a configuration example of an AU of the leading picture.
- one AU delimiter indicating the boundary of the AU is arranged at the beginning of the AU of the leading picture.
- following the AU delimiter, the NAL unit of one SPS, the NAL units of one or more PPSs, the NAL units of one or more SEIs, and the NAL units of encoded data of one or more slices are arranged in order. Thereafter, filler data is arranged as needed. Then, if the leading picture is the last picture of the sequence, an End of Sequence NAL unit indicating the end of the sequence is arranged, and if the leading picture is the last picture of the video stream, an End of Stream NAL unit indicating the end of the video stream is arranged.
- a GOP structure map including reference layer information is stored in the NAL unit of SEI.
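The NAL-unit ordering of the leading picture's AU described above can be summarized in a short structural sketch. NAL units are represented abstractly as strings; the labels `AUD`, `EOS`, and `EOB` are shorthand for this sketch, not bitstream syntax, and no byte-level framing is modeled.

```python
def build_leading_picture_au(sps, pps_list, sei_list, slices,
                             filler=None, end_of_sequence=False,
                             end_of_stream=False):
    """Arrange the NAL units of a GOP's leading-picture AU in the
    order described above: AU delimiter, one SPS, one or more PPSs,
    one or more SEIs (including the GOP structure map), coded slices,
    optional filler, then optional end-of-sequence / end-of-stream."""
    au = ["AUD"]              # AU delimiter marks the AU boundary
    au.append(sps)            # one SPS
    au += pps_list            # one or more PPSs
    au += sei_list            # one or more SEIs
    au += slices              # encoded data in slice units
    if filler:
        au.append(filler)     # filler data, as needed
    if end_of_sequence:
        au.append("EOS")      # leading picture is last in the sequence
    if end_of_stream:
        au.append("EOB")      # leading picture is last in the stream
    return au
```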
- FIG. 5 is a diagram showing an example of the syntax of the GOP structure map.
- in the GOP structure map, number_of_pictures_in_GOP, the number of pictures of the corresponding GOP, is described.
- in addition, for each picture constituting the corresponding GOP, 5-bit shifting_bits in which 1 is set, and the picture_structure, temporal_id, and picture_type of the picture, are described.
- Picture_structure is a 3-bit value representing a frame structure when displaying a picture.
- the picture_structure indicates, for example, whether the frame rate at display time is the same as, twice, or three times the frame rate of the video stream.
- Temporal_id is a 3-bit value obtained by subtracting 1 from nuh_temporal_id_plus1, which represents the reference layer information of the picture and is included in the NAL header of the encoded data of the picture.
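The relationship between nuh_temporal_id_plus1 and temporal_id can be illustrated with a minimal parser for the 2-byte HEVC NAL unit header. The bit layout assumed here (forbidden_zero_bit, 6-bit nal_unit_type, 6-bit nuh_layer_id, 3-bit nuh_temporal_id_plus1) follows H.265; this is a sketch, not a full bitstream reader.

```python
def temporal_id_from_nal_header(header: bytes) -> int:
    """Extract TemporalId from a 2-byte HEVC NAL unit header.

    nuh_temporal_id_plus1 occupies the low 3 bits of the 16-bit
    header; TemporalId is that value minus 1, matching the
    temporal_id field of the GOP structure map."""
    value = (header[0] << 8) | header[1]
    nuh_temporal_id_plus1 = value & 0x7   # low 3 bits
    return nuh_temporal_id_plus1 - 1
```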
- the reference layer information is information that is not adopted in the AVC method of the BD standard, but is adopted in the SVC (Scalable Video Coding Extension) method or the HEVC method.
- Picture_type (picture type information) is a 4-bit value representing the type of the picture: for example, 1000b for an I picture, 1010b for a reference B picture, and 0010b for a non-reference B picture.
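The per-picture fields above can be packed into a bit sequence as a sketch of the FIG. 5 syntax. The field widths are taken from the description above (a picture count, then per picture a 5-bit marker set to 1, 3-bit picture_structure, 3-bit temporal_id, 4-bit picture_type); the width of the picture count and the surrounding SEI payload framing are assumptions for illustration.

```python
def pack_gop_structure_map(pictures):
    """Pack a GOP structure map, MSB-first, one bit per list entry.

    `pictures` is a list of dicts with keys picture_structure,
    temporal_id, and picture_type, mirroring the per-picture fields
    of FIG. 5."""
    bits = []

    def put(value, width):
        # append `value` as `width` bits, most significant bit first
        bits.extend((value >> i) & 1 for i in reversed(range(width)))

    put(len(pictures), 16)               # number_of_pictures_in_GOP
    for pic in pictures:
        put(1, 5)                        # shifting_bits, always 1
        put(pic["picture_structure"], 3)
        put(pic["temporal_id"], 3)
        put(pic["picture_type"], 4)
    return bits
```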
- FIG. 6 is a diagram for explaining reference layer information.
- the number of layers in the reference relationship is three.
- the horizontal axis represents Display Order
- the vertical axis represents reference layer information.
- the squares represent pictures, the alphabets in the squares represent picture types, and the numbers represent reference layer information.
- each arrow represents a reference relationship in which the picture represented by the square at the tip of the arrow refers to the picture at the origin of the arrow.
- a picture cannot refer to a picture whose reference layer information is larger than its own.
- in FIG. 6, the reference layer information of the one I picture and two P pictures among the nine pictures constituting the GOP is 0. The reference layer information of two B pictures is 1, and that of the remaining four B pictures is 2.
- accordingly, the one I picture and two P pictures whose reference layer information is 0 cannot refer to the B pictures whose reference layer information is 1 or 2, and the two B pictures whose reference layer information is 1 cannot refer to the four B pictures whose reference layer information is 2.
- the playback device 2 can therefore skip pictures whose reference layer information is larger than a threshold by selecting and decoding only pictures whose reference layer information is equal to or less than the threshold. For example, by selecting and decoding only pictures whose reference layer information is 0, the playback device 2 skips the six B pictures whose reference layer information is larger than 0; by selecting pictures whose reference layer information is 1 or less, it skips the four B pictures whose reference layer information is 2. As a result, trick play can be performed.
- the playback device 2 can easily control the granularity of the trick play by changing the threshold.
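The selection rule above can be sketched directly: keep a picture only if its reference layer information does not exceed the threshold. Because a picture never references one with larger reference layer information, the kept set is always decodable. The GOP list below mirrors the nine-picture example of FIG. 6; the tuple representation is illustrative.

```python
def select_for_trick_play(gop, max_temporal_id):
    """Select the pictures of a GOP to decode for trick play:
    `gop` is a list of (picture_type, reference_layer) pairs in
    display order, and only pictures at or below the threshold
    layer are kept."""
    return [(ptype, tid) for ptype, tid in gop if tid <= max_temporal_id]

# The GOP of FIG. 6: one I and two P pictures at layer 0,
# two B pictures at layer 1, four B pictures at layer 2.
gop = [("I", 0), ("B", 2), ("B", 1), ("B", 2),
       ("P", 0), ("B", 2), ("B", 1), ("B", 2), ("P", 0)]
```

With threshold 0 only the I and P pictures survive (coarse trick play); raising the threshold to 1 or 2 adds back the B-picture layers, giving finer-grained playback.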
- in contrast, when performing trick play based only on picture_type, the reference relationships are unknown, so the playback device can select and play only I pictures, which do not refer to other pictures.
- as described above, the reference layer information of one GOP, including the leading picture, is collectively described in the GOP structure map stored in an SEI of the leading picture.
- the playback device 2 can therefore acquire the reference layer information of all the pictures constituting the GOP merely by parsing the GOP structure map. Based on the acquired reference layer information, the playback device 2 parses and decodes, among the pictures other than the leading picture in the GOP, only the AUs of the pictures whose reference layer information is equal to or less than the threshold, so trick play is easy to perform.
- FIG. 7 is a block diagram showing a configuration example of the video encoding unit 52 of FIG.
- the video encoding unit 52 of FIG. 7 includes an A/D conversion unit 71, a screen rearrangement buffer 72, a calculation unit 73, an orthogonal transformation unit 74, a quantization unit 75, a lossless encoding unit 76, an accumulation buffer 77, an inverse quantization unit 79, an inverse orthogonal transformation unit 80, and an addition unit 81.
- the video encoding unit 52 includes a filter 82, a frame memory 85, a switch 86, an intra prediction unit 87, a motion prediction / compensation unit 89, a predicted image selection unit 92, and a rate control unit 93.
- the A / D conversion unit 71 of the video encoding unit 52 A / D converts the input analog signal of each picture, and outputs the digital signal of each picture after conversion to the screen rearrangement buffer 72 for storage.
- the screen rearrangement buffer 72 rearranges the pictures in the stored display order into the order for encoding in accordance with the GOP structure.
- the screen rearrangement buffer 72 outputs the rearranged pictures to the calculation unit 73, the intra prediction unit 87, and the motion prediction/compensation unit 89 as the current picture.
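The rearrangement from display order to encoding order can be sketched as follows. This is a simplified model of the screen rearrangement buffer, assuming a single layer of B pictures whose two reference pictures must be encoded first; real GOP structures (including the multi-layer one of FIG. 6) reorder more elaborately.

```python
def display_to_coding_order(gop):
    """Rearrange a GOP from display order into coding order: each
    reference picture (I or P) is moved ahead of the B pictures
    displayed between it and the previous reference picture, since
    those B pictures need it as a reference.

    `gop` is a list of (picture_type, display_index) pairs."""
    coding, pending_b = [], []
    for ptype, idx in gop:
        if ptype in ("I", "P"):
            coding.append((ptype, idx))   # reference picture first
            coding.extend(pending_b)      # then the B pictures it anchors
            pending_b = []
        else:
            pending_b.append((ptype, idx))
    return coding + pending_b
```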
- the calculation unit 73 performs coding in CU units by subtracting the predicted image supplied from the predicted image selection unit 92 from the current picture supplied from the screen rearrangement buffer 72.
- the calculation unit 73 outputs the resulting picture to the orthogonal transformation unit 74 as residual information.
- when no predicted image is supplied from the predicted image selection unit 92, the calculation unit 73 outputs the current picture read from the screen rearrangement buffer 72 to the orthogonal transformation unit 74 as residual information as it is.
- the orthogonal transformation unit 74 orthogonally transforms the residual information from the computation unit 73 in units of TU (Transform Unit).
- the orthogonal transformation unit 74 supplies the orthogonal transformation coefficient obtained as a result of the orthogonal transformation to the quantization unit 75.
- the quantization unit 75 performs quantization on the orthogonal transformation coefficient supplied from the orthogonal transformation unit 74.
- the quantization unit 75 supplies the quantized orthogonal transformation coefficient to the lossless encoding unit 76.
- the lossless encoding unit 76 acquires, from the intra prediction unit 87, intra prediction mode information indicating the optimal intra prediction mode. Further, the lossless encoding unit 76 acquires, from the motion prediction / compensation unit 89, inter prediction mode information indicating the optimal inter prediction mode, reference picture identification information for identifying a reference picture, motion vector information, and the like. Furthermore, the lossless encoding unit 76 acquires offset filter information related to adaptive offset filter processing from the filter 82.
- the lossless encoding unit 76 performs lossless coding, such as variable-length coding (for example, CAVLC (Context-Adaptive Variable Length Coding)) or arithmetic coding (for example, CABAC (Context-Adaptive Binary Arithmetic Coding)), on the quantized orthogonal transformation coefficients supplied from the quantization unit 75.
- the lossless encoding unit 76 losslessly encodes intra prediction mode information or inter prediction mode information, motion vector information, reference picture identification information, offset filter information, and the like as encoding information related to encoding.
- the lossless encoding unit 76 arranges the losslessly encoded encoding information and the like in the slice header and the like in units of slices.
- the lossless encoding unit 76 adds the slice header to the losslessly encoded orthogonal transformation coefficients in slice units, and supplies the result to the accumulation buffer 77 as encoded data in slice units.
- the accumulation buffer 77 temporarily stores the encoded data in slice units supplied from the lossless encoding unit 76, and supplies the stored encoded data in slice units to the NAL conversion unit 53 in FIG. 3.
- the quantized orthogonal transformation coefficient output from the quantization unit 75 is also input to the inverse quantization unit 79.
- the inverse quantization unit 79 performs inverse quantization on the orthogonal transformation coefficient quantized by the quantization unit 75 by a method corresponding to the quantization method in the quantization unit 75.
- the inverse quantization unit 79 supplies the orthogonal transformation coefficient obtained as a result of the inverse quantization to the inverse orthogonal transformation unit 80.
- the inverse orthogonal transformation unit 80 performs inverse orthogonal transformation on the orthogonal transformation coefficient supplied from the inverse quantization unit 79 on a TU basis by a method corresponding to the orthogonal transformation method in the orthogonal transformation unit 74.
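The quantization / inverse-quantization pair described above can be illustrated with a uniform scalar quantizer. This is a simplified stand-in, not the HEVC quantizer (which works on transform blocks with QP-derived scaling); it shows only the mirror relationship between the quantization unit 75 and the inverse quantization unit 79 and the bounded reconstruction error that results.

```python
def quantize(coeffs, step):
    """Uniform scalar quantization of transform coefficients:
    each coefficient is divided by the step size and rounded."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    """The matching inverse quantization: reconstruction is
    level * step, so the roundtrip error is at most half a step
    per coefficient."""
    return [l * step for l in levels]
```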
- the inverse orthogonal transform unit 80 supplies the residual information obtained as a result to the addition unit 81.
- the addition unit 81 locally decodes the current picture in CU units by adding the residual information supplied from the inverse orthogonal transform unit 80 and the prediction image supplied from the prediction image selection unit 92. When the prediction image is not supplied from the prediction image selection unit 92, the addition unit 81 sets the residual information supplied from the inverse orthogonal transformation unit 80 as the decoding result. The addition unit 81 supplies the locally decoded current picture to the frame memory 85. Also, the adding unit 81 supplies the current picture, in which all the regions have been decoded, to the filter 82 as a coded picture.
- the filter 82 performs a filtering process on the encoded picture supplied from the adding unit 81. Specifically, the filter 82 sequentially performs deblocking filter processing and adaptive offset filter (SAO (Sample adaptive offset)) processing. The filter 82 supplies the filtered encoded picture to the frame memory 85. In addition, the filter 82 supplies the lossless encoding unit 76 with information indicating the type and the offset of the performed adaptive offset filter processing as the offset filter information.
- the frame memory 85 stores the current picture supplied from the adding unit 81 and the encoded picture supplied from the filter 82.
- pixels adjacent to the current block, which is the PU (Prediction Unit) to be processed of the current picture, are supplied to the intra prediction unit 87 via the switch 86 as peripheral pixels.
- the coded picture is output to the motion prediction / compensation unit 89 via the switch 86 as a reference picture candidate.
- the intra prediction unit 87 performs, for the current block, intra prediction processing in all candidate intra prediction modes using the neighboring pixels read from the frame memory 85 via the switch 86.
- the intra prediction unit 87 calculates cost function values (described later in detail) for all candidate intra prediction modes based on the current picture read from the screen rearrangement buffer 72 and the predicted images generated as a result of the intra prediction processing. The intra prediction unit 87 then determines the intra prediction mode with the smallest cost function value to be the optimal intra prediction mode.
- the intra prediction unit 87 supplies the predicted image generated in the optimal intra prediction mode and the corresponding cost function value to the predicted image selection unit 92.
- the intra prediction unit 87 supplies the intra prediction mode information to the lossless coding unit 76 when notified of the selection of the predicted image generated in the optimal intra prediction mode from the predicted image selection unit 92.
- the cost function value is also referred to as the RD (Rate Distortion) cost. It is calculated based on the High Complexity mode or Low Complexity mode method defined in the JM (Joint Model), the reference software of the H.264/AVC format, which is published at http://iphome.hhi.de/suehring/tml/index.htm.
- specifically, when the High Complexity mode is adopted, encoding is tentatively performed for all candidate prediction modes, and a cost function value represented by the following equation (1) is calculated for each prediction mode.

  Cost(Mode) = D + λ · R   (1)

- here, D is the difference (distortion) between the original image and the decoded image, R is the generated code amount including up to the orthogonal transformation coefficients, and λ is a Lagrange undetermined multiplier given as a function of the quantization parameter QP.
- on the other hand, when the Low Complexity mode is adopted, a cost function value represented by the following equation (2) is calculated for each prediction mode.

  Cost(Mode) = D + QPtoQuant(QP) · Header_Bit   (2)

- here, D is the difference (distortion) between the original image and the predicted image, Header_Bit is the code amount of the encoding information, and QPtoQuant is a function given as a function of the quantization parameter QP.
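The mode decision based on equation (1) can be sketched directly; the values `lam` and `qp_to_quant` are assumed to be precomputed from QP, and the candidate tuples are an illustrative representation, not the JM data structures.

```python
def cost_high_complexity(distortion, rate, lam):
    """High Complexity mode RD cost of equation (1):
    Cost(Mode) = D + lambda * R, with D the distortion against the
    decoded image and R the generated code amount."""
    return distortion + lam * rate

def cost_low_complexity(distortion, header_bit, qp_to_quant):
    """Low Complexity mode cost of equation (2): D is taken against
    the predicted image and only the header bits are weighted, so no
    full encode/decode is needed per candidate mode."""
    return distortion + qp_to_quant * header_bit

def best_mode(candidates, lam):
    """Pick the (mode_name, D, R) candidate with the smallest RD cost,
    as the intra prediction unit 87 and the motion prediction/
    compensation unit 89 do for their respective mode sets."""
    return min(candidates, key=lambda m: cost_high_complexity(m[1], m[2], lam))
```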
- the motion prediction / compensation unit 89 performs motion prediction / compensation processing in all candidate inter prediction modes for the current block using the reference picture candidates. Specifically, the motion prediction / compensation unit 89 detects the motion vector of the current block in all candidate inter prediction modes based on the current picture from the screen rearrangement buffer 72 and the reference picture candidates read from the frame memory 85 via the switch 86. The inter prediction mode is a mode that represents the size of the current block. The motion prediction / compensation unit 89 performs compensation processing on the reference picture candidates based on the detected motion vectors, and generates predicted images.
- the motion prediction / compensation unit 89 also calculates cost function values for all candidate inter prediction modes and reference picture candidates based on the current picture read from the screen rearrangement buffer 72 and the predicted images.
- the motion prediction / compensation unit 89 determines the inter prediction mode with the smallest cost function value as the optimal inter prediction mode, and determines the reference picture candidate as the reference picture. Then, the motion prediction / compensation unit 89 supplies the minimum value of the cost function value and the corresponding prediction image to the prediction image selection unit 92.
- when notified of the selection of the predicted image generated in the optimal inter prediction mode from the predicted image selection unit 92, the motion prediction / compensation unit 89 generates motion vector information representing the motion vector corresponding to the predicted image. Then, the motion prediction / compensation unit 89 supplies the inter prediction mode information, the motion vector information, and the reference picture identification information to the lossless encoding unit 76.
- the predicted image selection unit 92 determines, of the optimal intra prediction mode and the optimal inter prediction mode, the one with the smaller corresponding cost function value to be the optimal prediction mode. Then, the predicted image selection unit 92 supplies the predicted image of the optimal prediction mode to the calculation unit 73 and the addition unit 81. Also, the predicted image selection unit 92 notifies the intra prediction unit 87 or the motion prediction / compensation unit 89 of the selection of the predicted image in the optimal prediction mode.
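As a toy illustration of the behavior just described (names and the string return values are hypothetical, not from the patent), the predicted image selection unit's choice can be sketched as:

```python
def select_optimal_prediction(intra_cost, inter_cost):
    # Choose whichever of the optimal intra and optimal inter prediction
    # modes has the smaller corresponding cost function value; the unit whose
    # prediction was chosen is then notified of the selection.
    if intra_cost <= inter_cost:
        return "intra", "notify intra prediction unit 87"
    return "inter", "notify motion prediction / compensation unit 89"
```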
- the rate control unit 93 controls the rate of the quantization operation of the quantization unit 75 based on the encoded data accumulated in the accumulation buffer 77 so that an overflow or an underflow does not occur.
- FIG. 8 is a flowchart for explaining stream file generation processing of the file generation unit 50 of FIG.
- in step S11 of FIG. 8, the setting unit 51 of the file generation unit 50 sets a parameter set including the SEI of the leading picture in which the GOP structure map is stored.
- the setting unit 51 supplies the set parameter set to the video encoding unit 52 and the NALization unit 53.
- in step S12, the video encoding unit 52 performs an encoding process of encoding each picture of the video data input from the outside in CU units in accordance with the HEVC method. Details of this encoding process will be described with reference to FIGS. 9 and 10 described later.
- in step S13, the NALization unit 53 NAL-izes the parameter set supplied from the setting unit 51 and the encoded data supplied from the video encoding unit 52 to generate NAL units.
- the NALization unit 53 supplies the generated NAL units to the multiplexing unit 54.
- in step S14, the multiplexing unit 54 combines the NAL units supplied from the NALization unit 53 in units of pictures to generate AUs, and generates a video stream composed of one or more AUs.
- the multiplexing unit 54 supplies the video stream to the filing unit 56.
- in step S15, the audio encoding unit 55 encodes the input audio data, and supplies the audio stream obtained as a result to the filing unit 56.
- in step S16, the filing unit 56 multiplexes the video stream supplied from the multiplexing unit 54 and the audio stream supplied from the audio encoding unit 55 to generate an AV stream.
- in step S17, the filing unit 56 converts the AV stream into a file to generate and output a stream file. This stream file is recorded on the optical disc 11.
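The NAL-ization and AU assembly of steps S13 and S14 can be sketched roughly as below. The two-byte NAL unit header layout follows H.265, but the byte-level framing (4-byte start codes, no emulation prevention) is a simplification, and the function names are my own:

```python
def hevc_nal_header(nal_unit_type, layer_id=0, temporal_id=0):
    # H.265 NAL unit header: forbidden_zero_bit(1) | nal_unit_type(6) |
    # nuh_layer_id(6) | nuh_temporal_id_plus1(3), packed into two bytes.
    b0 = (nal_unit_type << 1) | (layer_id >> 5)
    b1 = ((layer_id & 0x1F) << 3) | (temporal_id + 1)
    return bytes([b0, b1])

def nal_unit(nal_unit_type, payload):
    # NAL-ize a payload (parameter set or slice data) with a start code,
    # roughly as the NALization unit 53 does (emulation prevention omitted).
    return b"\x00\x00\x00\x01" + hevc_nal_header(nal_unit_type) + payload

def access_unit(nal_units):
    # Step S14 analogue: combine the NAL units of one picture into an AU.
    return b"".join(nal_units)
```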
- FIG. 9 and FIG. 10 are flowcharts for explaining the details of the encoding process of step S12 of FIG.
- in step S31 of FIG. 9, the A / D conversion unit 71 (FIG. 7) of the video encoding unit 52 A / D converts the input analog signal of each picture, and outputs the digital signal of each picture after conversion to the screen rearrangement buffer 72 for storage.
- in step S32, the screen rearrangement buffer 72 rearranges the pictures in the stored display order into the order for encoding in accordance with the GOP structure.
- the screen rearrangement buffer 72 outputs the rearranged picture to the arithmetic unit 73, the intra prediction unit 87, and the motion prediction / compensation unit 89 as the current picture.
- in step S33, the intra prediction unit 87 performs, for the current block, intra prediction processing in all candidate intra prediction modes using the neighboring pixels read from the frame memory 85 via the switch 86. Further, the intra prediction unit 87 calculates cost function values for all candidate intra prediction modes based on the current picture from the screen rearrangement buffer 72 and the predicted image generated as a result of the intra prediction process. Then, the intra prediction unit 87 determines the intra prediction mode with the smallest cost function value as the optimal intra prediction mode. The intra prediction unit 87 supplies the predicted image generated in the optimal intra prediction mode and the corresponding cost function value to the predicted image selection unit 92.
- the motion prediction / compensation unit 89 performs motion prediction / compensation processing on all candidate inter prediction modes for the current block using the reference picture candidate. Also, the motion prediction / compensation unit 89 generates all candidate inter prediction modes and reference pictures based on the current picture from the screen rearrangement buffer 72 and the predicted image generated as a result of the motion prediction / compensation processing. Calculate the cost function value. The motion prediction / compensation unit 89 determines the inter prediction mode with the smallest cost function value as the optimal inter prediction mode, and determines the reference picture candidate as the reference picture. Then, the motion prediction / compensation unit 89 supplies the minimum value of the cost function value and the corresponding prediction image to the prediction image selection unit 92.
- in step S34, based on the cost function values supplied from the intra prediction unit 87 and the motion prediction / compensation unit 89, the predicted image selection unit 92 determines whichever of the optimal intra prediction mode and the optimal inter prediction mode has the smaller cost function value to be the optimal prediction mode. Then, the predicted image selection unit 92 supplies the predicted image of the optimal prediction mode to the calculation unit 73 and the addition unit 81.
- in step S35, the predicted image selection unit 92 determines whether the optimal prediction mode is the optimal inter prediction mode. If it is determined in step S35 that the optimal prediction mode is the optimal inter prediction mode, the predicted image selection unit 92 notifies the motion prediction / compensation unit 89 of the selection of the predicted image generated in the optimal inter prediction mode.
- the motion prediction / compensation unit 89 generates motion vector information representing the motion vector of the current block corresponding to the predicted image in response to the notification. Then, in step S36, the motion prediction / compensation unit 89 supplies the inter prediction mode information, the motion vector information, and the reference picture identification information to the lossless encoding unit 76, and the process proceeds to step S38.
- on the other hand, if it is determined in step S35 that the optimal prediction mode is not the optimal inter prediction mode, that is, if the optimal prediction mode is the optimal intra prediction mode, the predicted image selection unit 92 notifies the intra prediction unit 87 of the selection of the predicted image generated in the optimal intra prediction mode. Then, in step S37, the intra prediction unit 87 supplies the intra prediction mode information to the lossless encoding unit 76, and the process proceeds to step S38.
- in step S38, the computing unit 73 performs encoding by subtracting the predicted image supplied from the predicted image selection unit 92 from the current picture supplied from the screen rearrangement buffer 72.
- the computing unit 73 outputs the resulting picture to the orthogonal transform unit 74 as residual information.
- in step S39, the orthogonal transformation unit 74 performs orthogonal transformation on the residual information from the computation unit 73 on a TU basis, and supplies the orthogonal transformation coefficients obtained as a result to the quantization unit 75.
- in step S40, the quantization unit 75 quantizes the orthogonal transformation coefficients supplied from the orthogonal transformation unit 74, and supplies the quantized orthogonal transformation coefficients to the lossless encoding unit 76 and the inverse quantization unit 79.
- in step S41 of FIG. 10, the inverse quantization unit 79 inversely quantizes the quantized orthogonal transformation coefficients supplied from the quantization unit 75, and supplies the resulting orthogonal transformation coefficients to the inverse orthogonal transformation unit 80.
- in step S42, the inverse orthogonal transformation unit 80 performs inverse orthogonal transformation on the orthogonal transformation coefficients supplied from the inverse quantization unit 79 on a TU basis, and supplies the resulting residual information to the addition unit 81.
- in step S43, the addition unit 81 adds the residual information supplied from the inverse orthogonal transformation unit 80 and the predicted image supplied from the predicted image selection unit 92, and locally decodes the current picture.
- the addition unit 81 supplies the locally decoded current picture to the frame memory 85. Also, the adding unit 81 supplies the current picture, in which all the regions have been decoded, to the filter 82 as a coded picture.
- in step S44, the filter 82 performs deblocking filter processing on the encoded picture supplied from the addition unit 81.
- in step S45, the filter 82 performs adaptive offset filter processing for each LCU (Largest Coding Unit) on the encoded picture after the deblocking filter processing.
- the filter 82 supplies the resulting encoded picture to the frame memory 85. Also, the filter 82 supplies offset filter information to the lossless encoding unit 76 for each LCU.
- in step S46, the frame memory 85 stores the current picture supplied from the addition unit 81 and the encoded picture supplied from the filter 82. Pixels adjacent to the current block in the current picture are supplied to the intra prediction unit 87 via the switch 86 as peripheral pixels. Also, the encoded picture is output to the motion prediction / compensation unit 89 via the switch 86 as a reference picture candidate.
- in step S47, the lossless encoding unit 76 losslessly encodes the intra prediction mode information, or the inter prediction mode information, motion vector information, and reference picture identification information, together with the offset filter information, as encoding information.
- in step S48, the lossless encoding unit 76 losslessly encodes the quantized orthogonal transformation coefficients supplied from the quantization unit 75. Then, the lossless encoding unit 76 arranges the encoding information losslessly encoded in the process of step S47 in a slice header in slice units, and adds it to the losslessly encoded orthogonal transformation coefficients in slice units, thereby generating encoded data in slice units. The lossless encoding unit 76 supplies the encoded data in slice units to the accumulation buffer 77.
- in step S49, the accumulation buffer 77 temporarily accumulates the encoded data in slice units supplied from the lossless encoding unit 76.
- in step S50, the rate control unit 93 controls the rate of the quantization operation of the quantization unit 75 based on the encoded data accumulated in the accumulation buffer 77 so that an overflow or an underflow does not occur.
- in step S51, the accumulation buffer 77 outputs the stored encoded data in slice units to the NALization unit 53 in FIG. Then, the process returns to step S12 in FIG. 8 and proceeds to step S13.
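Steps S39 through S43 form the encoder's local-decoding loop: transform coefficients are quantized, then immediately inverse-quantized and added back to the prediction, so the encoder references the same reconstruction the decoder will later produce. A minimal sketch (the actual orthogonal transform is omitted, and a simple uniform quantizer stands in for the quantization unit 75):

```python
def quantize(coeffs, step):
    # Step S40 analogue: uniform quantization of orthogonal transform coefficients.
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    # Step S41 analogue: inverse quantization recovers approximate coefficients;
    # this is where the lossy coding error is introduced.
    return [q * step for q in levels]

def reconstruct(residual, prediction):
    # Step S43 analogue: add the decoded residual to the predicted image
    # to locally decode the current picture (clipping omitted).
    return [r + p for r, p in zip(residual, prediction)]
```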
- note that, although the intra prediction process and the motion prediction / compensation process are always performed here for simplicity of explanation, in practice only one of them may be performed depending on the picture type or the like.
- as described above, the file generation unit 50 sets SEI including reference layer information. Therefore, the playback device 2 can easily perform trick play based on the reference layer information included in the SEI, without parsing encoded data other than the encoded data of pictures whose reference layer information is less than or equal to a threshold value.
- the playback device 2 can easily control the trick play granularity based on the reference layer information included in the SEI by changing the threshold. Therefore, it can be said that the file generation unit 50 can set information that allows easy control of the trick play granularity.
- FIG. 11 is a block diagram showing an example of the configuration of a fast-forward playback unit that performs fast-forward playback of the video stream of the stream file recorded on the optical disc 11 in the playback device 2 of FIG.
- the fast-forwarding reproduction unit 110 in FIG. 11 includes a reading unit 111, a separation unit 112, an extraction unit 113, a selection unit 114, and a decoding unit 115.
- the reading unit 111 of the fast-forwarding reproduction unit 110 reads the AU of the leading picture of the AV stream stored as a stream file on the optical disc 11. Further, the reading unit 111 reads an AU of the picture represented by the selected picture information supplied from the selecting unit 114 out of the AV stream stored as a stream file on the optical disc 11. The reading unit 111 supplies the read AU to the separating unit 112.
- the separating unit 112 receives the AU supplied from the reading unit 111.
- the separation unit 112 separates each NAL unit constituting the AU, and supplies the separated NAL units to the extraction unit 113.
- the extraction unit 113 extracts the parameter set and the encoded data in slice units from the NAL unit supplied from the separation unit 112, and supplies the extracted data to the decoding unit 115. Also, the extraction unit 113 supplies the GOP structure map stored in the SEI of the leading picture in the parameter set to the selection unit 114.
- the selection unit 114 selects a picture other than the leading picture to be subjected to fast-forward playback. Specifically, the selection unit 114 selects a picture other than the first picture whose reference layer information is equal to or less than the threshold value, based on the reference layer information of each picture constituting the GOP described in the GOP structure map. This threshold is determined based on, for example, the granularity of fast-forward playback designated by the user. The selection unit 114 supplies the selected picture information representing the selected picture to the reading unit 111.
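A sketch of this selection, assuming the GOP structure map can be read as a list of (picture, reference_layer) pairs in decoding order with the leading picture first (this data layout is an assumption for illustration, not the patent's actual syntax):

```python
def select_pictures(gop_structure_map, threshold):
    # Mimics the selection unit 114: among the pictures other than the
    # leading picture, pick those whose reference layer information is
    # less than or equal to the threshold.
    leading, *rest = gop_structure_map
    return [pic for pic, layer in rest if layer <= threshold]
```

Lowering the threshold selects fewer pictures (a coarser fast-forward), which is how the trick-play granularity is controlled without parsing the encoded data itself.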
- the decoding unit 115 decodes, in CU units, the encoded data in slice units supplied from the extraction unit 113 in accordance with the HEVC scheme. At this time, the decoding unit 115 also refers to the parameter set supplied from the extraction unit 113 as necessary. The decoding unit 115 outputs the picture obtained as a result of the decoding to the display device 3 of FIG.
- FIG. 12 is a block diagram showing a configuration example of the decoding unit 115 of FIG.
- the decoding unit 115 in FIG. 12 includes an accumulation buffer 131, a lossless decoding unit 132, an inverse quantization unit 133, an inverse orthogonal transformation unit 134, an addition unit 135, a filter 136, and a screen rearrangement buffer 139.
- the decoding unit 115 further includes a D / A conversion unit 140, a frame memory 141, a switch 142, an intra prediction unit 143, a motion compensation unit 147, and a switch 148.
- the accumulation buffer 131 of the decoding unit 115 receives encoded data in slice units from the extraction unit 113 of FIG. 11.
- the accumulation buffer 131 supplies the accumulated encoded data in units of pictures to the lossless decoding unit 132 as the encoded data of the current picture.
- the lossless decoding unit 132 applies lossless decoding, such as variable-length decoding or arithmetic decoding, corresponding to the lossless encoding of the lossless encoding unit 76 in FIG. 7 to the encoded data from the accumulation buffer 131, and obtains quantized orthogonal transformation coefficients and encoding information.
- the lossless decoding unit 132 supplies the quantized orthogonal transformation coefficients to the inverse quantization unit 133. Also, the lossless decoding unit 132 supplies the intra prediction mode information and the like as encoding information to the intra prediction unit 143.
- the lossless decoding unit 132 supplies reference picture identification information, motion vector information, and inter prediction mode information to the motion compensation unit 147.
- the lossless decoding unit 132 supplies intra prediction mode information or inter prediction mode information as coding information to the switch 148.
- the lossless decoding unit 132 supplies offset filter information as encoding information to the filter 136.
- the inverse quantization unit 133, the inverse orthogonal transformation unit 134, the addition unit 135, the filter 136, the frame memory 141, the switch 142, the intra prediction unit 143, and the motion compensation unit 147 perform the same processing as the inverse quantization unit 79, the inverse orthogonal transformation unit 80, the addition unit 81, the filter 82, the frame memory 85, the switch 86, the intra prediction unit 87, and the motion prediction / compensation unit 89 in FIG. 7, respectively, thereby decoding a picture in CU units.
- the inverse quantization unit 133 inversely quantizes the quantized orthogonal transformation coefficient from the lossless decoding unit 132, and supplies the orthogonal transformation coefficient obtained as a result to the inverse orthogonal transformation unit 134.
- the inverse orthogonal transformation unit 134 performs inverse orthogonal transformation on the orthogonal transformation coefficient from the inverse quantization unit 133 on a TU basis.
- the inverse orthogonal transformation unit 134 supplies the residual information obtained as a result of the inverse orthogonal transformation to the addition unit 135.
- the addition unit 135 locally decodes the current picture in CU units by adding the residual information supplied from the inverse orthogonal transform unit 134 and the predicted image supplied from the switch 148. When the predicted image is not supplied from the switch 148, the addition unit 135 sets the residual information supplied from the inverse orthogonal transform unit 134 as the decoding result. The addition unit 135 supplies the locally decoded current picture obtained as a result of the decoding to the frame memory 141. Also, the adding unit 135 supplies the current picture, in which all the regions have been decoded, to the filter 136 as a decoded picture.
- the filter 136 filters the decoded picture supplied from the adding unit 135. Specifically, the filter 136 first performs deblocking filter processing on the decoded picture. Next, the filter 136 uses, for each LCU, the offset represented by the offset filter information from the lossless decoding unit 132 to perform adaptive offset of the type represented by the offset filter information with respect to the decoded picture after the deblocking filter processing. Perform filter processing. The filter 136 supplies the decoded picture after adaptive offset filtering to the frame memory 141 and the screen rearrangement buffer 139.
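As a rough illustration of one of the adaptive offset filter types the filter 136 can apply per LCU, a simplified band-offset pass might look like the following. This sketches the general SAO band-offset idea, not the patent's implementation:

```python
def sao_band_offset(pixels, band_position, offsets, bit_depth=8):
    # Samples are divided into 32 equal bands; the four consecutive bands
    # starting at band_position receive the signaled offsets, and results
    # are clipped to the valid sample range.
    max_val = (1 << bit_depth) - 1
    out = []
    for p in pixels:
        band = p >> (bit_depth - 5)  # 8-bit samples -> bands of width 8
        if band_position <= band < band_position + 4:
            p = min(max_val, max(0, p + offsets[band - band_position]))
        out.append(p)
    return out
```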
- the screen rearrangement buffer 139 stores the decoded picture supplied from the filter 136.
- the screen rearrangement buffer 139 rearranges the stored decoded pictures in the order for encoding into the original display order, and supplies the rearranged pictures to the D / A conversion unit 140.
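The rearrangement back to display order can be sketched as follows, assuming each decoded picture carries its display index (a hypothetical representation chosen for illustration):

```python
def to_display_order(decoded_pictures):
    # Mimics the screen rearrangement buffer 139: pictures arrive in the
    # order used for encoding and leave in the original display order.
    return [data for _, data in sorted(decoded_pictures)]
```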
- the D / A conversion unit 140 D / A converts the decoded picture in units of frames supplied from the screen rearrangement buffer 139, and outputs the converted picture to the display device 3 in FIG.
- the frame memory 141 stores the current picture supplied from the adding unit 135 and the decoded picture supplied from the filter 136.
- the pixels adjacent to the current block in the current picture are supplied to the intra prediction unit 143 via the switch 142 as peripheral pixels.
- the decoded picture is output to the motion compensation unit 147 via the switch 142 as a reference picture.
- the intra prediction unit 143 uses the neighboring pixels read from the frame memory 141 via the switch 142 to select the optimal intra prediction mode indicated by the intra prediction mode information supplied from the lossless decoding unit 132. Perform intra prediction processing.
- the intra prediction unit 143 supplies the predicted image generated as a result to the switch 148.
- the motion compensation unit 147 performs a motion compensation process on the current block based on the inter prediction mode information from the lossless decoding unit 132, the reference picture identification information, and the motion vector information.
- the motion compensation unit 147 reads the reference picture specified by the reference picture identification information from the frame memory 141 via the switch 142.
- the motion compensation unit 147 performs motion compensation processing of the current block in the optimal inter prediction mode indicated by the inter prediction mode information, using the reference picture and the motion vector represented by the motion vector information.
- the motion compensation unit 147 supplies the predicted image generated as a result to the switch 148.
- the switch 148 supplies the prediction image supplied from the intra prediction unit 143 to the addition unit 135.
- the switch 148 supplies the prediction image supplied from the motion compensation unit 147 to the addition unit 135.
- FIG. 13 is a flow chart for explaining the fast forward reproduction processing of the fast forward reproduction unit 110 of FIG. This fast forward reproduction process is performed in units of GOPs.
- in step S111 of FIG. 13, the reading unit 111 of the fast-forward playback unit 110 reads the AU of the leading picture of the AV stream stored as a stream file on the optical disc 11 and supplies the AU to the separation unit 112.
- in step S112, the separation unit 112 separates each NAL unit constituting the AU supplied from the reading unit 111, and supplies the separated NAL units to the extraction unit 113.
- in step S113, the extraction unit 113 extracts the parameter set and the encoded data in slice units from the NAL units supplied from the separation unit 112, and supplies the extracted data to the decoding unit 115.
- in step S114, the fast-forward playback unit 110 determines whether the AU read by the reading unit 111 is the AU of the leading picture. If it is determined in step S114 that the read AU is the AU of the leading picture, the process proceeds to step S115.
- in step S115, the extraction unit 113 supplies, to the selection unit 114, the GOP structure map stored in the SEI of the leading picture in the parameter set extracted in step S113.
- in step S116, based on the GOP structure map supplied from the extraction unit 113, the selection unit 114 selects the pictures other than the leading picture to be subjected to fast-forward playback.
- the selecting unit 114 supplies selected picture information representing the selected picture to the reading unit 111, and the process proceeds to step S117.
- on the other hand, if it is determined in step S114 that the read AU is not the AU of the leading picture, that is, if the read AU is the AU of a picture other than the leading picture and does not include the GOP structure map, the processes of steps S115 and S116 are not performed. Then, the process proceeds to step S117.
- in step S117, the decoding unit 115 performs a decoding process of decoding, in CU units, the encoded data in slice units supplied from the extraction unit 113, using the parameter set supplied from the extraction unit 113 as necessary. Details of this decoding process will be described with reference to FIG. 14 described later.
- in step S118, the reading unit 111 determines whether the AUs of all the pictures represented by the selected picture information have been read. If it is determined in step S118 that the AUs of all the pictures represented by the selected picture information have not yet been read, the process proceeds to step S119.
- in step S119, the reading unit 111 reads, out of the AUs of the pictures represented by the selected picture information in the AV stream stored as a stream file on the optical disc 11, an AU that has not yet been read. Then, the process returns to step S112, and the processes of steps S112 to S119 are repeated until the AUs of all the pictures represented by the selected picture information have been read.
- on the other hand, if it is determined in step S118 that the AUs of all the pictures represented by the selected picture information have been read, the process ends.
- FIG. 14 is a flowchart for explaining the details of the decoding process of step S117 of FIG.
- in step S131 of FIG. 14, the accumulation buffer 131 (FIG. 12) of the decoding unit 115 receives the encoded data in slice units from the extraction unit 113 of FIG. 11.
- the accumulation buffer 131 supplies the accumulated encoded data in units of pictures to the lossless decoding unit 132 as the encoded data of the current picture.
- in step S132, the lossless decoding unit 132 losslessly decodes the encoded data from the accumulation buffer 131, and obtains quantized orthogonal transformation coefficients and encoding information.
- the lossless decoding unit 132 supplies the quantized orthogonal transformation coefficient to the inverse quantization unit 133.
- the lossless decoding unit 132 supplies the intra prediction mode information and the like as encoding information to the intra prediction unit 143.
- the lossless decoding unit 132 supplies reference picture identification information, motion vector information, and inter prediction mode information to the motion compensation unit 147.
- the lossless decoding unit 132 supplies intra prediction mode information or inter prediction mode information as coding information to the switch 148.
- the lossless decoding unit 132 supplies offset filter information as encoding information to the filter 136.
- in step S133, the inverse quantization unit 133 inversely quantizes the quantized orthogonal transformation coefficients from the lossless decoding unit 132, and supplies the resulting orthogonal transformation coefficients to the inverse orthogonal transformation unit 134.
- in step S134, the inverse orthogonal transformation unit 134 performs inverse orthogonal transformation on the orthogonal transformation coefficients from the inverse quantization unit 133, and supplies the residual information obtained as a result to the addition unit 135.
- in step S135, the motion compensation unit 147 determines whether the inter prediction mode information has been supplied from the lossless decoding unit 132. If it is determined in step S135 that the inter prediction mode information has been supplied, the process proceeds to step S136.
- in step S136, the motion compensation unit 147 performs a motion compensation process on the current block based on the inter prediction mode information, the reference picture identification information, and the motion vector information from the lossless decoding unit 132.
- the motion compensation unit 147 supplies the predicted image generated as a result to the addition unit 135 via the switch 148, and the process proceeds to step S138.
- on the other hand, if it is determined in step S135 that the inter prediction mode information has not been supplied, that is, if the intra prediction mode information has been supplied to the intra prediction unit 143, the process proceeds to step S137.
- in step S137, the intra prediction unit 143 performs, for the current block, intra prediction processing in the optimal intra prediction mode indicated by the intra prediction mode information, using the neighboring pixels read from the frame memory 141 via the switch 142.
- the intra prediction unit 143 supplies the predicted image generated as a result of the intra prediction process to the addition unit 135 via the switch 148, and the process proceeds to step S138.
- in step S138, the addition unit 135 locally decodes the current picture by adding the residual information supplied from the inverse orthogonal transformation unit 134 and the predicted image supplied from the switch 148.
- the addition unit 135 supplies the locally decoded current picture obtained as a result of the decoding to the frame memory 141. Also, the adding unit 135 supplies the current picture, in which all the regions have been decoded, to the filter 136 as a decoded picture.
- in step S139, the filter 136 performs deblocking filter processing on the decoded picture supplied from the addition unit 135 to remove block distortion.
- in step S140, the filter 136 performs adaptive offset filter processing for each LCU on the decoded picture after the deblocking filter processing, based on the offset filter information supplied from the lossless decoding unit 132.
- the filter 136 supplies the image after the adaptive offset filter processing to the screen rearrangement buffer 139 and the frame memory 141.
- in step S141, the frame memory 141 stores the current picture supplied from the addition unit 135 and the decoded picture supplied from the filter 136.
- the pixels adjacent to the current block in the current picture are supplied to the intra prediction unit 143 via the switch 142 as peripheral pixels.
- the decoded picture is output to the motion compensation unit 147 via the switch 142 as a reference picture.
- in step S142, the screen rearrangement buffer 139 stores the decoded picture supplied from the filter 136, rearranges the stored pictures from the order for encoding into the original display order, and supplies them to the D / A conversion unit 140.
- in step S143, the D / A conversion unit 140 D / A converts the picture supplied from the screen rearrangement buffer 139 and outputs it to the display device 3 of FIG. Then, the process returns to step S117 in FIG. 13 and proceeds to step S118.
- the fast forward playback unit 110 selects a picture to be a target of fast forward playback based on the reference layer information included in the SEI. Therefore, the fast forward playback unit 110 can easily perform fast forward playback without parsing encoded data other than the encoded data of the selected picture.
- the fast-forwarding reproduction unit 110 can easily control the granularity of the fast-forwarding reproduction based on the reference layer information included in the SEI by changing the threshold of the reference layer information corresponding to the picture to be selected.
- note that a portion of the playback device 2 that performs other trick play, such as rewind playback, on the video stream is similar to the fast-forward playback unit 110, except that the order in which pictures are output to the display device 3 is different.
- The rewind playback unit that performs rewind playback selects and decodes pictures to be targets of rewind playback based on the reference layer information, and outputs the pictures obtained as a result of decoding in reverse display order.
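The rewind variant can be sketched in the same way: select by reference layer, then emit in reverse display order. The tuple layout (picture_id, display_order, reference_layer) and all values are assumptions for illustration, not the actual stream syntax.

```python
# Hypothetical sketch of the rewind playback unit: pictures are first
# selected by reference layer threshold, then their ids are returned in
# reverse display order (display order modeled as a picture order count).

def select_rewind_pictures(gop, threshold):
    """Select pictures whose reference layer does not exceed threshold,
    then return their ids in reverse display order."""
    selected = [(pid, order) for pid, order, layer in gop if layer <= threshold]
    return [pid for pid, _ in sorted(selected, key=lambda p: p[1], reverse=True)]

# Decoding order with display order counts; layer values invented for illustration.
gop = [("I0", 0, 0), ("B1", 1, 2), ("B2", 2, 1), ("B3", 3, 2), ("P4", 4, 0)]
assert select_rewind_pictures(gop, 1) == ["P4", "B2", "I0"]
```

Only the final output order differs from the fast-forward case; selection and decoding proceed identically.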
- When the playback device 2 performs trick play based on the reference layer information, the playback may be performed at a frame rate lower than the frame rate of the video stream recorded on the optical disc 11.
- The playback device 2 may decode only the pictures selected based on the reference layer information in the same time it would otherwise spend decoding all the pictures. This reduces the processing load of the playback device 2, so that decoding is possible even when the processing capability of the playback device 2 is low.
- the leading picture may not be reproduced when the reference layer information of the leading picture is larger than the threshold.
- Second Embodiment (Description of a computer to which the present disclosure is applied)
- The series of processes described above may be performed by hardware or by software.
- When the series of processes is performed by software, a program constituting the software is installed on a computer.
- Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 15 is a block diagram showing an example of a hardware configuration of a computer that executes the series of processes described above according to a program.
- In the computer 200, a central processing unit (CPU) 201, a read only memory (ROM) 202, and a random access memory (RAM) 203 are mutually connected by a bus 204.
- an input / output interface 205 is connected to the bus 204.
- An input unit 206, an output unit 207, a storage unit 208, a communication unit 209, and a drive 210 are connected to the input / output interface 205.
- the input unit 206 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 207 includes a display, a speaker, and the like.
- the storage unit 208 includes a hard disk, a non-volatile memory, and the like.
- the communication unit 209 is configured of a network interface or the like.
- the drive 210 drives removable media 211 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- When the CPU 201 loads the program stored in the storage unit 208 into the RAM 203 via the input / output interface 205 and the bus 204 and executes it, the series of processes described above is performed.
- the program executed by the computer 200 can be provided by being recorded on, for example, a removable medium 211 as a package medium or the like. Also, the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 208 via the input / output interface 205 by attaching the removable media 211 to the drive 210.
- the program can be received by the communication unit 209 via a wired or wireless transmission medium and installed in the storage unit 208.
- the program can be installed in advance in the ROM 202 or the storage unit 208.
- The program executed by the computer 200 may be a program in which processing is performed in chronological order according to the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
- reference layer information is multiplexed into coded data and transmitted from the coding side to the decoding side.
- the method of transmitting reference layer information is not limited to such an example.
- reference layer information may be transmitted or recorded as separate data associated with the coded data without being multiplexed to the coded data.
- The term “associate” means that a picture (which may be part of a picture, such as a slice or a block) included in the video stream can be linked at the time of decoding with information corresponding to that picture.
- the reference layer information may be recorded on a recording medium (or another recording area of the same recording medium) different from the encoded data.
- the reference layer information and the encoded data may be associated with each other in an arbitrary unit such as, for example, a plurality of pictures, one picture, or a part in a picture.
- A system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether all the components are housed in the same case. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
- the provision of content may be performed via broadcast waves or a network.
- the present disclosure can be applied to a set top box that receives broadcast waves, a television receiver, a personal computer that transmits and receives data via a network, and the like.
- the reference layer information may be included in the information stored in the SEI other than the GOP structure map.
- the reference layer information may be stored in SEI of a picture other than the leading picture or in a parameter set other than SEI.
- the present disclosure can adopt a cloud computing configuration in which one function is shared and processed by a plurality of devices via a network.
- each step described in the above-described flowchart can be executed by one device or in a shared manner by a plurality of devices.
- the plurality of processes included in one step can be executed by being shared by a plurality of devices in addition to being executed by one device.
- the present disclosure can also be configured as follows.
- (1) An image processing apparatus comprising: a setting unit configured to set additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
(2) The image processing apparatus according to (1), wherein the additional information is configured to include reference layer information of all the pictures constituting a GOP (Group of Pictures) including the picture.
(3) The image processing apparatus according to (2), wherein the additional information is configured to be additional information of encoded data of a leading picture of the GOP.
(4) The image processing apparatus according to any one of (1) to (3), wherein the additional information is configured to include picture type information indicating a type of the picture.
- (5) The image processing apparatus according to any one of (1) to (4), wherein the coding method of the picture is HEVC (High Efficiency Video Coding), and the additional information is configured to be supplemental enhancement information (SEI).
- (6) An image processing method including a setting step in which an image processing device sets additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
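The encoder-side "setting unit" above can be sketched as building, per GOP, a structure holding the reference layer and picture type of every picture. This is a minimal sketch modeled loosely on the GOP structure map; the field names and dict representation are assumptions for illustration, not the actual SEI bitstream syntax.

```python
# Hypothetical sketch of the setting unit: build additional information for
# the leading picture of a GOP containing, for every picture of the GOP,
# its picture type and its reference layer value.

def build_gop_structure_map(pictures):
    """pictures: list of (picture_type, reference_layer) in decoding order."""
    return {
        "num_pictures": len(pictures),
        "entries": [
            {"picture_type": ptype, "reference_layer": layer}
            for ptype, layer in pictures
        ],
    }

# Example GOP with invented layer values for illustration.
sei = build_gop_structure_map([("I", 0), ("B", 2), ("B", 1), ("P", 0)])
assert sei["num_pictures"] == 4
assert sei["entries"][0] == {"picture_type": "I", "reference_layer": 0}
```

Attaching this information to the leading picture's encoded data lets a decoder select reproduction targets for the whole GOP without parsing the other pictures' encoded data.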
- (8) An image processing apparatus comprising: a selection unit that selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
- (9) The image processing apparatus according to (8), wherein the additional information is configured to include reference layer information of all the pictures constituting a GOP (Group of Pictures) including the picture.
(10) The image processing apparatus according to (9), wherein the additional information is configured to be additional information of encoded data of a first picture of the GOP.
(11) The image processing apparatus according to any one of (8) to (10), wherein the additional information is configured to include picture type information indicating a type of the picture.
(12) The image processing apparatus according to any one of (8) to (11), wherein the coding method of the picture is HEVC (High Efficiency Video Coding), and the additional information is configured to be supplemental enhancement information (SEI).
- (13) The image processing apparatus according to any one of (8) to (12), further comprising a decoding unit that decodes encoded data of the picture to be reproduced selected by the selection unit.
- (14) An image processing method including a selection step in which an image processing device selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
- (15) A program for causing a computer to function as a selection unit that selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
1. First Embodiment: Recording/Reproduction System (FIGS. 1 to 14)
2. Second Embodiment: Computer (FIG. 15)
(Configuration Example of an Embodiment of a Recording/Reproduction System)
FIG. 1 is a block diagram showing a configuration example of an embodiment of a recording/reproduction system to which the present disclosure is applied.
FIG. 2 is a diagram showing an example of the directory structure of the files recorded on the optical disc 11 of FIG. 1.
FIG. 3 is a block diagram showing a configuration example of a file generation unit, in the recording device 1 of FIG. 1, that generates a stream file.
FIG. 4 shows a configuration example of the AU of the leading picture.
FIG. 5 is a diagram showing an example of the syntax of the GOP structure map.
FIG. 6 is a diagram explaining the reference layer information.
FIG. 7 is a block diagram showing a configuration example of the video encoding unit 52 of FIG. 3.
FIG. 8 is a flowchart explaining the stream file generation process of the file generation unit 50 of FIG. 3.
FIG. 11 is a block diagram showing a configuration example of a fast-forward playback unit, in the playback device 2 of FIG. 1, that performs fast-forward playback of the video stream of a stream file recorded on the optical disc 11.
FIG. 12 is a block diagram showing a configuration example of the decoding unit 115 of FIG. 11.
FIG. 13 is a flowchart explaining the fast-forward playback process of the fast-forward playback unit 110 of FIG. 11. This fast-forward playback process is performed in GOP units.
(Description of a Computer to Which the Present Disclosure Is Applied)
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed on a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
(1)
An image processing apparatus comprising: a setting unit that sets additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
(2)
The image processing apparatus according to (1), wherein the additional information is configured to include reference layer information of all pictures constituting a GOP (Group of Pictures) including the picture.
(3)
The image processing apparatus according to (2), wherein the additional information is configured to be additional information of encoded data of the leading picture of the GOP.
(4)
The image processing apparatus according to any one of (1) to (3), wherein the additional information is configured to include picture type information representing a type of the picture.
(5)
The image processing apparatus according to any one of (1) to (4), wherein a coding method of the picture is HEVC (High Efficiency Video Coding), and the additional information is configured to be SEI (Supplemental Enhancement Information).
(6)
An image processing method including a setting step in which an image processing apparatus sets additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
(7)
A program for causing a computer to function as a setting unit that sets additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
(8)
An image processing apparatus comprising: a selection unit that selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
(9)
The image processing apparatus according to (8), wherein the additional information is configured to include reference layer information of all pictures constituting a GOP (Group of Pictures) including the picture.
(10)
The image processing apparatus according to (9), wherein the additional information is configured to be additional information of encoded data of the leading picture of the GOP.
(11)
The image processing apparatus according to any one of (8) to (10), wherein the additional information is configured to include picture type information representing a type of the picture.
(12)
The image processing apparatus according to any one of (8) to (11), wherein a coding method of the picture is HEVC (High Efficiency Video Coding), and the additional information is configured to be SEI (Supplemental Enhancement Information).
(13)
The image processing apparatus according to any one of (8) to (12), further comprising a decoding unit that decodes the encoded data of the picture to be reproduced selected by the selection unit.
(14)
An image processing method including a selection step in which an image processing apparatus selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
(15)
A program for causing a computer to function as a selection unit that selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
(16)
A recording medium on which an encoded stream is recorded, the encoded stream including encoded data of a picture and additional information of the encoded data, the additional information including reference layer information representing a layer of a reference relation of the picture, the recording medium being mounted on and reproduced by an information processing apparatus, and causing the information processing apparatus that has acquired the encoded stream to select a picture to be reproduced based on the reference layer information included in the additional information.
Claims (16)
- An image processing apparatus comprising: a setting unit that sets additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
- The image processing apparatus according to claim 1, wherein the additional information is configured to include reference layer information of all pictures constituting a GOP (Group of Pictures) including the picture.
- The image processing apparatus according to claim 2, wherein the additional information is configured to be additional information of encoded data of the leading picture of the GOP.
- The image processing apparatus according to claim 1, wherein the additional information is configured to include picture type information representing a type of the picture.
- The image processing apparatus according to claim 1, wherein a coding method of the picture is HEVC (High Efficiency Video Coding), and the additional information is configured to be SEI (Supplemental Enhancement Information).
- An image processing method including a setting step in which an image processing apparatus sets additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
- A program for causing a computer to function as a setting unit that sets additional information of encoded data of a picture, the additional information including reference layer information representing a layer of a reference relation of the picture.
- An image processing apparatus comprising: a selection unit that selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
- The image processing apparatus according to claim 8, wherein the additional information is configured to include reference layer information of all pictures constituting a GOP (Group of Pictures) including the picture.
- The image processing apparatus according to claim 9, wherein the additional information is configured to be additional information of encoded data of the leading picture of the GOP.
- The image processing apparatus according to claim 8, wherein the additional information is configured to include picture type information representing a type of the picture.
- The image processing apparatus according to claim 8, wherein a coding method of the picture is HEVC (High Efficiency Video Coding), and the additional information is configured to be SEI (Supplemental Enhancement Information).
- The image processing apparatus according to claim 8, further comprising a decoding unit that decodes the encoded data of the picture to be reproduced selected by the selection unit.
- An image processing method including a selection step in which an image processing apparatus selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
- A program for causing a computer to function as a selection unit that selects a picture to be reproduced based on reference layer information representing a layer of a reference relation of the picture, the reference layer information being included in additional information of encoded data of the picture.
- A recording medium on which an encoded stream is recorded, the encoded stream including encoded data of a picture and additional information of the encoded data, the additional information including reference layer information representing a layer of a reference relation of the picture, the recording medium being mounted on and reproduced by an information processing apparatus, and causing the information processing apparatus that has acquired the encoded stream to select a picture to be reproduced based on the reference layer information included in the additional information.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/108,818 US10390047B2 (en) | 2015-01-09 | 2015-12-25 | Image processing apparatus and image processing method for controlling the granularity in trick play |
EP15874421.9A EP3244615A4 (en) | 2015-01-09 | 2015-12-25 | Image processing device, image processing method, and program, and recording medium |
CN201580003681.XA CN106105221B (zh) | 2015-01-09 | 2015-12-25 | 图像处理设备、图像处理方法以及记录介质 |
JP2016536970A JP6690536B2 (ja) | 2015-01-09 | 2015-12-25 | 画像処理装置、画像処理方法、およびプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-002890 | 2015-01-09 | ||
JP2015002890 | 2015-01-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016111199A1 true WO2016111199A1 (ja) | 2016-07-14 |
Family
ID=56358129
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/086202 WO2016111199A1 (ja) | 2015-01-09 | 2015-12-25 | 画像処理装置、画像処理方法、およびプログラム、並びに記録媒体 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10390047B2 (ja) |
EP (1) | EP3244615A4 (ja) |
JP (1) | JP6690536B2 (ja) |
CN (1) | CN106105221B (ja) |
WO (1) | WO2016111199A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7532362B2 (ja) | 2019-06-20 | 2024-08-13 | ソニーセミコンダクタソリューションズ株式会社 | 画像処理装置および方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006003814A1 (ja) * | 2004-07-01 | 2006-01-12 | Mitsubishi Denki Kabushiki Kaisha | ランダムアクセス可能な映像情報記録媒体、及び記録方法、及び再生装置及び再生方法 |
WO2012096806A1 (en) * | 2011-01-14 | 2012-07-19 | Vidyo, Inc. | High layer syntax for temporal scalability |
JP2013158003A (ja) | 2010-02-12 | 2013-08-15 | Sony Corp | 再生装置、記録媒体、および情報処理方法 |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005101827A1 (ja) * | 2004-04-16 | 2005-10-27 | Matsushita Electric Industrial Co., Ltd. | 記録媒体、再生装置、プログラム |
WO2005106875A1 (en) * | 2004-04-28 | 2005-11-10 | Matsushita Electric Industrial Co., Ltd. | Moving picture stream generation apparatus, moving picture coding apparatus, moving picture multiplexing apparatus and moving picture decoding apparatus |
EP1827023A1 (en) | 2006-02-27 | 2007-08-29 | THOMSON Licensing | Method and apparatus for packet loss detection and virtual packet generation at SVC decoders |
US10136118B2 (en) * | 2008-07-20 | 2018-11-20 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
JP4957823B2 (ja) * | 2009-04-08 | 2012-06-20 | ソニー株式会社 | 再生装置および再生方法 |
KR20120015443A (ko) * | 2009-04-13 | 2012-02-21 | 리얼디 인크. | 향상된 해상도의 스테레오스코픽 비디오의 엔코딩, 디코딩 및 배포 |
WO2011013030A1 (en) * | 2009-07-27 | 2011-02-03 | Koninklijke Philips Electronics N.V. | Combining 3d video and auxiliary data |
US20110211634A1 (en) * | 2010-02-22 | 2011-09-01 | Richard Edwin Goedeken | Method and apparatus for offset metadata insertion in multi-view coded view |
US8724710B2 (en) * | 2010-02-24 | 2014-05-13 | Thomson Licensing | Method and apparatus for video encoding with hypothetical reference decoder compliant bit allocation |
US9131033B2 (en) * | 2010-07-20 | 2015-09-08 | Qualcomm Incoporated | Providing sequence data sets for streaming video data |
BR112013004460A2 (pt) * | 2010-09-03 | 2016-06-07 | Sony Corp | dispositivo e método de processamento de imagem |
EP2659676A4 (en) * | 2010-12-27 | 2018-01-03 | Telefonaktiebolaget LM Ericsson (publ) | Method and arrangement for processing of encoded video |
US9451252B2 (en) * | 2012-01-14 | 2016-09-20 | Qualcomm Incorporated | Coding parameter sets and NAL unit headers for video coding |
US20130308926A1 (en) * | 2012-05-17 | 2013-11-21 | Gangneung-Wonju National University Industry Academy Cooperation Group | Recording medium, reproducing device for performing trick play for data of the recording medium, and method thereof |
US20140078249A1 (en) * | 2012-09-20 | 2014-03-20 | Qualcomm Incorporated | Indication of frame-packed stereoscopic 3d video data for video coding |
WO2014051409A1 (ko) * | 2012-09-28 | 2014-04-03 | 삼성전자 주식회사 | 참조 픽처 정보를 이용한 병렬 처리 비디오 부호화 방법 및 장치, 병렬 처리 비디오 복호화 방법 및 장치 |
US9473779B2 (en) * | 2013-03-05 | 2016-10-18 | Qualcomm Incorporated | Parallel processing for video coding |
US10194182B2 (en) * | 2014-02-03 | 2019-01-29 | Lg Electronics Inc. | Signal transmission and reception apparatus and signal transmission and reception method for providing trick play service |
-
2015
- 2015-12-25 WO PCT/JP2015/086202 patent/WO2016111199A1/ja active Application Filing
- 2015-12-25 EP EP15874421.9A patent/EP3244615A4/en not_active Ceased
- 2015-12-25 JP JP2016536970A patent/JP6690536B2/ja not_active Expired - Fee Related
- 2015-12-25 CN CN201580003681.XA patent/CN106105221B/zh not_active Expired - Fee Related
- 2015-12-25 US US15/108,818 patent/US10390047B2/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006003814A1 (ja) * | 2004-07-01 | 2006-01-12 | Mitsubishi Denki Kabushiki Kaisha | ランダムアクセス可能な映像情報記録媒体、及び記録方法、及び再生装置及び再生方法 |
JP2013158003A (ja) | 2010-02-12 | 2013-08-15 | Sony Corp | 再生装置、記録媒体、および情報処理方法 |
WO2012096806A1 (en) * | 2011-01-14 | 2012-07-19 | Vidyo, Inc. | High layer syntax for temporal scalability |
Non-Patent Citations (4)
Title |
---|
JILI BOYCE ET AL.: "Extensible High Layer Syntax for Scalability", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-E279_R3, 5TH MEETING, March 2011 (2011-03-01), Geneva, CH, pages 1 - 10, XP030008785 * |
JILL BOYCE ET AL.: "High layer syntax to improve support for temporal scalability", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-D200, 4TH MEETING, January 2011 (2011-01-01), Daegu, KR, pages 1 - 14, XP030008240 * |
MISKA M. HANNUKSELA ET AL.: "Indication of the temporal structure of coded video sequences", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-H0423R3, 8TH MEETING, February 2012 (2012-02-01), San Jose, CA , USA, pages 1 - 5, XP030051824 * |
See also references of EP3244615A4 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7532362B2 (ja) | 2019-06-20 | 2024-08-13 | ソニーセミコンダクタソリューションズ株式会社 | 画像処理装置および方法 |
Also Published As
Publication number | Publication date |
---|---|
EP3244615A4 (en) | 2018-06-20 |
CN106105221A (zh) | 2016-11-09 |
CN106105221B (zh) | 2021-05-04 |
US10390047B2 (en) | 2019-08-20 |
JP6690536B2 (ja) | 2020-04-28 |
EP3244615A1 (en) | 2017-11-15 |
JPWO2016111199A1 (ja) | 2017-10-19 |
US20170302964A1 (en) | 2017-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6585223B2 (ja) | 画像復号装置 | |
CN107018424B (zh) | 图像处理装置和图像处理方法 | |
US10764574B2 (en) | Encoding method, decoding method, encoding apparatus, decoding apparatus, and encoding and decoding apparatus | |
JP6465863B2 (ja) | 画像復号装置、画像復号方法及び記録媒体 | |
EP2476256B1 (en) | Video editing and reformating for digital video recorder | |
JP7354260B2 (ja) | Mrl基盤のイントラ予測を実行する映像コーディング方法及び装置 | |
RU2690439C2 (ru) | Устройство и способ кодирования изображений и устройство и способ декодирования изображений | |
US10027988B2 (en) | Image processing device and method | |
EP4117291A1 (en) | Mixed nal unit type based-video encoding/decoding method and apparatus, and method for transmitting bitstream | |
JP2015195543A (ja) | 画像復号装置、画像符号化装置 | |
EP4117290A1 (en) | Mixed nal unit type-based image encoding/decoding method and device, and method for transmitting bitstream | |
US9648336B2 (en) | Encoding apparatus and method | |
TWI789661B (zh) | 用於虛擬參考解碼器和輸出層集的視頻資料流、視頻編碼器、裝置及方法 | |
US20240267549A1 (en) | High level syntax signaling method and device for image/video coding | |
JP6690536B2 (ja) | 画像処理装置、画像処理方法、およびプログラム | |
RU2806784C1 (ru) | Способ и устройство кодирования/декодирования изображений на основе смешанного типа nal-единицы и способ для передачи потока битов | |
CN114747215B (zh) | 调色板编码或变换单元的基于量化参数信息的图像或视频编码 | |
RU2812029C2 (ru) | Способ и устройство кодирования/декодирования изображений на основе смешанного типа nal-единицы и способ для передачи потока битов | |
KR20230017236A (ko) | 픽처 출력 관련 정보 기반 영상 또는 비디오 코딩 | |
JPWO2015098713A1 (ja) | 画像復号装置および画像符号化装置 | |
KR20220070533A (ko) | 인코더, 디코더 및 대응하는 방법 | |
JP2015035785A (ja) | 動画像符号化装置、撮像装置、動画像符号化方法、プログラム、及び記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2016536970 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15108818 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2015874421 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015874421 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15874421 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |