EP4128808A1 - An apparatus, a method and a computer program for video coding and decoding
- Publication number
- EP4128808A1 (application EP21779094.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- viewpoint
- representation
- viewpoint representation
- switch
- encoded
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- All classifications fall under H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION:
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
- H04N13/178—Metadata, e.g. disparity information
- H04N13/194—Transmission of image signals
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N13/366—Image reproducers using viewer tracking
- H04N19/17—Adaptive coding characterised by the coding unit being an image region, e.g. an object
- H04N19/597—Predictive coding specially adapted for multi-view video sequence encoding
- H04N21/21805—Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
- H04N21/4728—End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H04N13/383—Image reproducers using viewer tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Definitions
- the present invention relates to an apparatus, a method and a computer program for video coding and decoding.
- The aim is to reduce the bitrate, e.g. such that the primary viewport (i.e., the current viewing orientation) is transmitted at the best quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution.
- When the viewing orientation changes, e.g. when the user turns his/her head while viewing the content with a head-mounted display (HMD), another version of the content needs to be streamed, matching the new viewing orientation. This typically involves a viewpoint switch from a first viewpoint to a second viewpoint.
- Viewpoints may, for example, represent different viewing positions in the same scene, or provide completely different scenes, e.g. in a virtual tourist tour.
- A method comprises encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representations being associated with mutually related viewpoints; and encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
- An apparatus comprises means for encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representations being associated with mutually related viewpoints; and means for encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
- An apparatus comprises at least one processor and at least one memory, said at least one memory having computer program code stored thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: encode omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representations being associated with mutually related viewpoints; and encode metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
- said indication comprises at least one parameter indicating at least one of the following: - in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point;
- the switch from the first viewpoint representation to the second viewpoint representation is to be delayed until the second viewpoint representation has been decoded ready for rendering.
- said indication is configured to be encoded as a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
- a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure for ISO/IEC 23090-2.
- a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointSwitchingListStruct syntax structure for ISO/IEC 23090-2.
- the apparatus further comprises means for encoding, responsive to said indication indicating that the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point, a second parameter for indicating a timeout value for the switch from the first viewpoint representation to the second viewpoint representation to be completed.
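As an illustration of the behaviour these embodiments describe, a minimal player-side sketch follows. The class and identifier names (ViewpointSwitchIndication, switch_at_next_switch_point, timeout_ms) are hypothetical illustrations, not syntax from ISO/IEC 23090-2.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewpointSwitchIndication:
    """Hypothetical model of the signalled switch-control indication."""
    switch_at_next_switch_point: bool  # False: delay until target is decoded
    timeout_ms: Optional[int] = None   # optional completion timeout

def on_switch_triggered(ind: ViewpointSwitchIndication,
                        at_switch_point: bool,
                        second_rep_ready: bool) -> str:
    """Return the player action for a triggered viewpoint switch."""
    if ind.switch_at_next_switch_point:
        # Start switching at the next available switch point; a player
        # could abandon the switch if timeout_ms expires first.
        return "switch" if at_switch_point else "wait_for_switch_point"
    # Delayed mode: keep playing the first viewpoint representation
    # until the second one has been decoded ready for rendering.
    return "switch" if second_rep_ready else "keep_first_viewpoint"

# Example: delayed switching, second representation not yet decoded.
ind = ViewpointSwitchIndication(switch_at_next_switch_point=False)
assert on_switch_triggered(ind, at_switch_point=True,
                           second_rep_ready=False) == "keep_first_viewpoint"
```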
- A method comprises receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decoding and rendering said first encoded viewpoint representation for playback; receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
- An apparatus comprises means for receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; means for decoding and means for rendering said first encoded viewpoint representation for playback; means for receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and means for switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
- An apparatus comprises at least one processor and at least one memory, said at least one memory having computer program code stored thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: receive at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decode and render said first encoded viewpoint representation for playback; receive, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switch, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
- Further aspects relate to apparatuses and computer-readable storage media having code stored thereon, which are arranged to carry out the above methods and one or more of the embodiments related thereto.
- Figure 1 shows schematically an electronic device employing embodiments of the invention
- Figure 2 shows schematically a user equipment suitable for employing embodiments of the invention
- Figures 3a and 3b show schematically an encoder and a decoder suitable for implementing embodiments of the invention
- Figure 4 shows an example of the MPEG Omnidirectional Media Format (OMAF) concept
- Figures 5a and 5b show two alternative methods for packing 360-degree video content into 2D packed pictures for encoding
- Figure 6 shows the process of forming a monoscopic equirectangular panorama picture.
- Figure 7 shows a flow chart of an encoding method according to an embodiment of the invention.
- Figure 8 shows a flow chart of a decoding method according to an embodiment of the invention.
- Figure 9 shows a schematic diagram of an example multimedia communication system within which various embodiments may be implemented.
- Figure 1 shows a block diagram of a video coding system according to an example embodiment as a schematic block diagram of an exemplary apparatus or electronic device 50, which may incorporate a codec according to an embodiment of the invention.
- Figure 2 shows a layout of an apparatus according to an example embodiment. The elements of Figs. 1 and 2 will be explained next.
- the electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system. However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require encoding and decoding or encoding or decoding video images.
- the apparatus 50 may comprise a housing 30 for incorporating and protecting the device.
- the apparatus 50 further may comprise a display 32 in the form of a liquid crystal display.
- the display may be any suitable display technology suitable to display an image or video.
- the apparatus 50 may further comprise a keypad 34.
- any suitable data or user interface mechanism may be employed.
- the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.
- the apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input.
- the apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection.
- the apparatus 50 may also comprise a battery (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator).
- the apparatus may further comprise a camera capable of recording or capturing images and/or video.
- the apparatus 50 may further comprise an infrared port for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/firewire wired connection.
- the apparatus 50 may comprise a controller 56, processor or processor circuitry for controlling the apparatus 50.
- the controller 56 may be connected to memory 58 which in embodiments of the invention may store both data in the form of image and audio data and/or may also store instructions for implementation on the controller 56.
- the controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller.
- the apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC and UICC reader for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
- the apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network.
- the apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
- the apparatus 50 may comprise a camera capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing.
- the apparatus may receive the video image data for processing from another device prior to transmission and/or storage.
- the apparatus 50 may also receive either wirelessly or by a wired connection the image for coding/decoding.
- the structural elements of apparatus 50 described above represent examples of means for performing a corresponding function.
- a video codec consists of an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form.
- a video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec.
- encoder discards some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
- Figures 3a and 3b show an encoder and decoder for encoding and decoding the 2D pictures.
- Figure 3a illustrates an image to be encoded (In); a predicted representation of an image block (P'n); a prediction error signal (Dn); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); a transform (T) and inverse transform (T⁻¹); a quantization (Q) and inverse quantization (Q⁻¹); entropy encoding (E); a reference frame memory (RFM); inter prediction (Pinter); intra prediction (Pintra); mode selection (MS) and filtering (F).
- Figure 3b illustrates a predicted representation of an image block (P'n); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); an inverse transform (T⁻¹); an inverse quantization (Q⁻¹); an entropy decoding (E⁻¹); a reference frame memory (RFM); a prediction (either inter or intra) (P); and filtering (F).
- H.264/AVC encoders and High Efficiency Video Coding (H.265/HEVC, a.k.a. HEVC) encoders encode the video information in two phases. First, pixel values in a certain picture area (or "block") are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Second, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. the Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients, and entropy coding the quantized coefficients.
- Video codecs may also provide a transform skip mode, which the encoders may choose to use. In the transform skip mode, the prediction error is coded in a sample domain, for example by deriving a sample-wise difference value relative to certain adjacent samples and coding the sample-wise difference value with an entropy coder.
- In temporal prediction (inter prediction), the sources of prediction are previously decoded pictures (a.k.a. reference pictures).
- In intra block copy (IBC; a.k.a. intra-block-copy prediction), prediction is applied similarly to temporal prediction, but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process.
- Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively.
- In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction, provided that they are performed with the same or a similar process as temporal prediction.
- Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, reduces temporal redundancy. In inter prediction, the sources of prediction are previously decoded pictures.
- Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated.
- Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
- motion information is indicated by motion vectors associated with each motion compensated image block.
- Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder) or decoded (in the decoder) relative to the prediction source block in one of the previously coded or decoded pictures.
- One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
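To make the predictor-plus-difference idea concrete, here is a small sketch using a median predictor over neighbouring motion vectors; the median rule is illustrative and is not the normative predictor of any particular codec.

```python
def median_mv_predictor(neighbor_mvs):
    """Median of spatially adjacent motion vectors, per component
    (assumes an odd number of neighbours for simplicity)."""
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return (xs[mid], ys[mid])

# Encoder side: entropy-code only the difference to the predictor.
neighbors = [(4, 0), (3, 1), (5, -1)]
mv = (5, 0)
pred = median_mv_predictor(neighbors)       # (4, 0)
mvd = (mv[0] - pred[0], mv[1] - pred[1])    # (1, 0) is what gets coded

# Decoder side: reconstruct the motion vector from predictor + difference.
rec = (pred[0] + mvd[0], pred[1] + mvd[1])
assert rec == mv
```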
- ISO: International Standards Organization
- MPEG: Moving Picture Experts Group
- MP4: MPEG-4 file format
- HEVC: High Efficiency Video Coding standard
- ISOBMFF: International Standards Organization (ISO) base media file format
- Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which the embodiments may be implemented.
- the aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
- a basic building block in the ISO base media file format is called a box.
- Each box has a header and a payload.
- the box header indicates the type of the box and the size of the box in terms of bytes.
- a box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes.
- a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box.
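A minimal sketch of walking this box structure follows, assuming a well-formed file; it also handles the 64-bit 'largesize' case (size == 1) and the 'box extends to end of file' case (size == 0) of the ISOBMFF header rules.

```python
import struct

def iter_boxes(f):
    """Yield (box_type, size, start_offset) for each top-level box
    in an ISOBMFF file; a sketch that assumes a well-formed file."""
    while True:
        start = f.tell()
        header = f.read(8)
        if len(header) < 8:
            return
        size, box_type = struct.unpack(">I4s", header)
        if size == 1:
            # A size of 1 means a 64-bit 'largesize' follows the type field.
            size = struct.unpack(">Q", f.read(8))[0]
        yield box_type.decode("ascii", "replace"), size, start
        if size == 0:
            return  # a size of 0 means the box extends to the end of the file
        f.seek(start + size)

# Usage (hypothetical file name):
# with open("movie.mp4", "rb") as f:
#     for box_type, size, offset in iter_boxes(f):
#         print(box_type, size, offset)  # e.g. ftyp, moov, mdat
```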
- the media data may be provided in one or more instances of MediaDataBox (‘mdat‘) and the MovieBox (‘moov’) may be used to enclose the metadata for timed media.
- the ‘moov’ box may include one or more tracks, and each track may reside in one corresponding TrackBox (‘trak’).
- Each track is associated with a handler, identified by a four-character code, specifying the track type.
- Video, audio, and image sequence tracks can be collectively called media tracks, and they contain an elementary media stream.
- Other track types comprise hint tracks and timed metadata tracks.
- Tracks comprise samples, such as audio or video frames.
- a media sample may correspond to a coded picture or an access unit.
- a media track refers to samples (which may also be referred to as media samples) formatted according to a media compression format (and its encapsulation to the ISO base media file format).
- a hint track refers to hint samples, containing cookbook instructions for constructing packets for transmission over an indicated communication protocol.
- a timed metadata track may refer to samples describing referred media and/or hint samples.
- a sample grouping in the ISO base media file format and its derivatives, such as the advanced video coding (AVC) file format and the scalable video coding (SVC) file format may be defined as an assignment of each sample in a track to be a member of one sample group, based on a grouping criterion.
- a sample group in a sample grouping is not limited to being contiguous samples and may contain non-adjacent samples. As there may be more than one sample grouping for the samples in a track, each sample grouping may have a type field to indicate the type of grouping.
- Sample groupings may be represented by two linked data structures: (1) a SampleToGroupBox (sbgp box) represents the assignment of samples to sample groups; and (2) a SampleGroupDescriptionBox (sgpd box) contains a sample group entry for each sample group describing the properties of the group. There may be multiple instances of the SampleToGroupBox and SampleGroupDescriptionBox based on different grouping criteria. These may be distinguished by a type field used to indicate the type of grouping.
- SampleToGroupBox may comprise a grouping_type_parameter field that can be used e.g. to indicate a sub-type of the grouping.
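A toy model of resolving a sample's group through the two linked structures might look as follows; the run-length layout is a simplification of the 'sbgp' payload (it ignores grouping_type_parameter and default descriptions), and the data values are invented for illustration.

```python
# Simplified 'sbgp' content: run-length (sample_count, group_description_index);
# index 0 means "no group", indices are 1-based into the 'sgpd' entries.
sbgp_entries = [(3, 1), (2, 0), (4, 2)]
sgpd_entries = ["group-A properties", "group-B properties"]

def group_of_sample(sample_index):
    """Return the sample group description for a 0-based sample index."""
    for count, desc_index in sbgp_entries:
        if sample_index < count:
            return None if desc_index == 0 else sgpd_entries[desc_index - 1]
        sample_index -= count
    return None  # beyond the mapped samples

assert group_of_sample(0) == "group-A properties"
assert group_of_sample(3) is None            # samples 3-4 are ungrouped
assert group_of_sample(5) == "group-B properties"
```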
- an edit list provides a mapping between the presentation timeline and the media timeline.
- an edit list provides for the linear offset of the presentation of samples in a track, provides for the indication of empty times and provides for a particular sample to be dwelled on for a certain period of time.
- the presentation timeline may be accordingly modified to provide for looping, such as for the looping videos of the various regions of the scene.
- an EditListBox may be contained in EditBox, which is contained in TrackBox ('trak').
- In the EditListBox, the box flags specify the repetition of the edit list.
- Setting a specific bit within the box flags (the least significant bit, i.e., flags & 1 in ANSI-C notation, where & indicates a bit-wise AND operation) equal to 0 specifies that the edit list is not repeated, while setting the specific bit (i.e., flags & 1 in ANSI-C notation) equal to 1 specifies that the edit list is repeated.
- the values of box flags greater than 1 may be defined to be reserved for future extensions.
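Expressed as a small helper mirroring the ANSI-C expression above:

```python
def edit_list_repeats(box_flags: int) -> bool:
    """Test the least significant bit of the box flags: 1 = repeated."""
    return (box_flags & 1) == 1

assert edit_list_repeats(0) is False  # edit list is not repeated
assert edit_list_repeats(1) is True   # edit list is repeated
```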
- A track group enables grouping of tracks based on certain characteristics, or grouping of tracks within a group that have a particular relationship. Track grouping, however, does not allow including any image items in the group.
- The syntax of TrackGroupBox in ISOBMFF is as follows:
aligned(8) class TrackGroupBox extends Box('trgr') {
}
- track_group_type indicates the grouping_type and shall be set to one of the following values, or a value registered, or a value from a derived specification or registration:
- 'msrc' indicates that this track belongs to a multi-source presentation.
- The tracks that have the same value of track_group_id within a TrackGroupTypeBox of track_group_type 'msrc' are mapped as originating from the same source.
- For example, a recording of a video telephony call may have both audio and video for both participants, and the value of track_group_id associated with the audio track and the video track of one participant differs from the value of track_group_id associated with the tracks of the other participant.
- The pair of track_group_id and track_group_type identifies a track group within the file.
- The tracks that contain a particular TrackGroupTypeBox having the same value of track_group_id and track_group_type belong to the same track group.
- the Entity grouping is similar to track grouping but enables grouping of both tracks and image items in the same group.
- group_id is a non-negative integer assigned to the particular grouping that shall not be equal to any group_id value of any other EntityToGroupBox, any item_ID value of the hierarchy level (file, movie or track) that contains the GroupsListBox, or any track_ID value (when the GroupsListBox is contained in the file level).
- num_entities_in_group specifies the number of entity_id values mapped to this entity group.
- entity_id is resolved to an item, when an item with item_ID equal to entity_id is present in the hierarchy level (file, movie or track) that contains the GroupsListBox, or to a track, when a track with track_ID equal to entity_id is present and the GroupsListBox is contained in the file level.
- Files conforming to the ISOBMFF may contain any non-timed objects, referred to as items, meta items, or metadata items, in a meta box (a.k.a. Metabox, four-character code: ‘meta’). While the name of the meta box refers to metadata, items can generally contain metadata or media data.
- the meta box may reside at the top level of the file, within a movie box (four-character code: ‘moov’), and within a track box (four-character code: ‘trak’), but at most one meta box may occur at each of the file level, movie level, or track level.
- the meta box may be required to contain a ‘hdlr’ box indicating the structure or format of the ‘meta’ box contents.
- The meta box may list and characterize any number of items that can be referred to, and each of them can be associated with a file name and is uniquely identified within the file by an item identifier (item_id), which is an integer value.
- The metadata items may be, for example, stored in the 'idat' box of the meta box or in an 'mdat' box, or reside in a separate file. If the metadata is located external to the file, then its location may be declared by the DataInformationBox (four-character code: 'dinf').
- The metadata may be encapsulated into either the XMLBox (four-character code: 'xml ') or the BinaryXMLBox (four-character code: 'bxml').
- An item may be stored as a contiguous byte range, or it may be stored in several extents, each being a contiguous byte range. In other words, items may be stored fragmented into extents, e.g. to enable interleaving.
- An extent is a contiguous subset of the bytes of the resource. The resource can be formed by concatenating the extents.
- the ItemPropertiesBox enables the association of any item with an ordered set of item properties. Item properties may be regarded as small data records.
- the ItemPropertiesBox consists of two parts: ItemPropertyContainerBox that contains an implicitly indexed list of item properties, and one or more ItemPropertyAssociationBox(es) that associate items with item properties.
- Hypertext Transfer Protocol (HTTP) has been widely used for the delivery of real-time multimedia content over the Internet, such as in video streaming applications.
- Adaptive HTTP streaming (AHS) was first standardized in Release 9 of the 3rd Generation Partnership Project (3GPP) packet-switched streaming (PSS) service.
- MPEG took 3GPP AHS Release 9 as a starting point for the MPEG DASH standard (ISO/IEC 23009-1: "Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats").
- MPEG DASH and 3GP-DASH are technically close to each other and may therefore be collectively referred to as DASH.
- Some concepts, formats and operations of DASH are described below as an example of a video streaming system, wherein the embodiments may be implemented.
- the aspects of the invention are not limited to the above standard documents but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
- the multimedia content may be stored on an HTTP server and may be delivered using HTTP.
- the content may be stored on the server in two parts: Media Presentation Description (MPD), which describes a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and segments, which contain the actual multimedia bitstreams in the form of chunks, in a single or multiple files.
- The MPD provides the necessary information for clients to establish dynamic adaptive streaming over HTTP.
- The MPD contains information describing the media presentation, such as the HTTP uniform resource locator (URL) of each Segment, for making GET Segment requests.
- the DASH client may obtain the MPD e.g. by using HTTP, email, thumb drive, broadcast, or other transport methods.
- the DASH client may become aware of the program timing, media-content availability, media types, resolutions, minimum and maximum bandwidths, and the existence of various encoded alternatives of multimedia components, accessibility features and required digital rights management (DRM), media-component locations on the network, and other content characteristics. Using this information, the DASH client may select the appropriate encoded alternative and start streaming the content by fetching the segments using e.g. HTTP GET requests. After appropriate buffering to allow for network throughput variations, the client may continue fetching the subsequent segments and also monitor the network bandwidth fluctuations. The client may decide how to adapt to the available bandwidth by fetching segments of different alternatives (with lower or higher bitrates) to maintain an adequate buffer.
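DASH leaves the adaptation logic to the client. As one possible sketch of the behaviour described above, a throughput-based selection with a safety margin could look as follows; the function and parameter names are illustrative, not from the DASH specification.

```python
def select_representation(bitrates_bps, measured_throughput_bps, safety=0.8):
    """Pick the highest advertised bitrate that fits within a
    safety fraction of the measured network throughput."""
    affordable = [b for b in sorted(bitrates_bps)
                  if b <= safety * measured_throughput_bps]
    return affordable[-1] if affordable else min(bitrates_bps)

# Representations advertised in the MPD, e.g. 1, 2.5 and 5 Mbit/s:
reps = [1_000_000, 2_500_000, 5_000_000]
assert select_representation(reps, 4_000_000) == 2_500_000
assert select_representation(reps, 500_000) == 1_000_000  # fall back to lowest
```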
- a media content component or a media component may be defined as one continuous component of the media content with an assigned media component type that can be encoded individually into a media stream.
- Media content may be defined as one media content period or a contiguous sequence of media content periods.
- Media content component type may be defined as a single type of media content such as audio, video, or text.
- a media stream may be defined as an encoded version of a media content component.
- a hierarchical data model is used to structure media presentation as follows.
- A media presentation consists of a sequence of one or more Periods; each Period contains one or more Groups; each Group contains one or more Adaptation Sets; each Adaptation Set contains one or more Representations; and each Representation consists of one or more Segments.
- a Group may be defined as a collection of Adaptation Sets that are not expected to be presented simultaneously.
- An Adaptation Set may be defined as a set of interchangeable encoded versions of one or several media content components.
- a Representation is one of the alternative choices of the media content or a subset thereof typically differing by the encoding choice, e.g. by bitrate, resolution, language, codec, etc.
- A Segment contains a certain duration of media data, and metadata to decode and present the included media content.
- A Segment is identified by a URI and can typically be requested by an HTTP GET request.
- A Segment may be defined as a unit of data associated with an HTTP-URL and optionally a byte range that are specified by an MPD.
- the DASH MPD complies with Extensible Markup Language (XML) and is therefore specified through elements and attributes as defined in XML.
- The MPD may be specified using the following conventions: Elements in an XML document may be identified by an upper-case first letter and may appear in bold face as Element. To express that an element Element1 is contained in another element Element2, one may write Element2.Element1. If an element's name consists of two or more combined words, camel casing may be used, such as ImportantElement, for example. Elements may be present either exactly once, or the minimum and maximum occurrence may be defined by <minOccurs> ... <maxOccurs>.
- Attributes in an XML document may be identified by a lower-case first letter and may be preceded by a '@'-sign, e.g. @attribute.
- Attributes may have assigned a status in the XML as mandatory (M), optional (O), optional with default value (OD) and conditionally mandatory (CM).
- descriptor elements are typically structured in the same way, in that they contain a @schemeIdUri attribute that provides a URI to identify the scheme and an optional attribute @value and an optional attribute @id.
- the semantics of the element are specific to the scheme employed.
- the URI identifying the scheme may be a URN or a URL.
- Some descriptors are specified in MPEG-DASH (ISO/IEC 23009-1), while descriptors can additionally or alternatively be specified in other specifications. When specified in specifications other than MPEG-DASH, the MPD does not provide any specific information on how to use descriptor elements. It is up to the application or specification that employs DASH formats to instantiate the description elements with appropriate scheme information.
- A descriptor is specified by a Scheme Identifier in the form of a URI and the value space for the element when that Scheme Identifier is used.
- the Scheme Identifier appears in the @schemeIdUri attribute.
- a text string may be defined for each value and this string may be included in the @value attribute.
- any extension element or attribute may be defined in a separate namespace.
- The @id value may be used to refer to a unique descriptor or to a group of descriptors. In the latter case, descriptors with identical values for the attribute @id may be required to be synonymous, i.e. the processing of one of the descriptors with an identical value for @id is sufficient.
- If the @schemeIdUri is a URN, equivalence may refer to lexical equivalence as defined in clause 5 of RFC 2141. If the @schemeIdUri is a URL, then equivalence may refer to equality on a character-for-character basis as defined in clause 6.2.1 of RFC 3986. If the @value attribute is not present, equivalence may be determined by the equivalence for @schemeIdUri only. Attributes and elements in extension namespaces might not be used for determining equivalence. The @id attribute may be ignored for equivalence determination.
- MPEG-DASH specifies descriptors EssentialProperty and SupplementalProperty.
- For the element EssentialProperty, the Media Presentation author expresses that the successful processing of the descriptor is essential to properly use the information in the parent element that contains this descriptor, unless the element shares the same @id with another EssentialProperty element. If EssentialProperty elements share the same @id, then processing one of the EssentialProperty elements with the same value for @id is sufficient. At least one EssentialProperty element of each distinct @id value is expected to be processed. If the scheme or the value for an EssentialProperty descriptor is not recognized, the DASH client is expected to ignore the parent element that contains the descriptor. Multiple EssentialProperty elements with the same value for @id and with different values for @id may be present in an MPD.
- For the element SupplementalProperty, the Media Presentation author expresses that the descriptor contains supplemental information that may be used by the DASH client for optimized processing. If the scheme or the value for a SupplementalProperty descriptor is not recognized, the DASH client is expected to ignore the descriptor. Multiple SupplementalProperty elements may be present in an MPD.
- MPEG-DASH specifies a Viewpoint element that is formatted as a property descriptor.
- the @schemeIdUri attribute of the Viewpoint element is used to identify the viewpoint scheme employed.
- Adaptation Sets containing non-equivalent Viewpoint element values contain different media content components.
- the Viewpoint elements may equally be applied to media content types that are not video.
- Adaptation Sets with equivalent Viewpoint element values are intended to be presented together. This handling should be applied equally for recognized and unrecognized @schemeIdUri values.
- An Initialization Segment may be defined as a Segment containing metadata that is necessary to present the media streams encapsulated in Media Segments.
- An Initialization Segment may comprise the Movie Box ('moov'), which might not include metadata for any samples, i.e. any metadata for samples is provided in 'moof' boxes.
- A Media Segment contains a certain duration of media data for playback at normal speed; such duration is referred to as the Media Segment duration or Segment duration.
- The content producer or service provider may select the Segment duration according to the desired characteristics of the service. For example, a relatively short Segment duration may be used in a live service to achieve a short end-to-end latency. The reason is that the Segment duration is typically a lower bound on the end-to-end latency perceived by a DASH client, since a Segment is a discrete unit of generating media data for DASH. Content generation is typically done in such a manner that a whole Segment of media data is made available to a server. Furthermore, many client implementations use a Segment as the unit for GET requests.
- Thus, a Segment can be requested by a DASH client only when the whole duration of the Media Segment is available and has been encoded and encapsulated into a Segment.
- different strategies of selecting Segment duration may be used.
- a Segment may be further partitioned into Subsegments to enable downloading segments in multiple parts, for example.
- Subsegments may be required to contain complete access units.
- Subsegments may be indexed by Segment Index box, which contains information to map presentation time range and byte range for each Subsegment.
- the Segment Index box may also describe subsegments and stream access points in the segment by signaling their durations and byte offsets.
- a DASH client may use the information obtained from Segment Index box(es) to make a HTTP GET request for a specific Subsegment using byte range HTTP request. If a relatively long Segment duration is used, then Subsegments may be used to keep the size of HTTP responses reasonable and flexible for bitrate adaptation.
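A sketch of such a byte-range request, using the Python requests library; the URL and byte offsets are placeholders, and in practice the offsets would come from the parsed Segment Index box.

```python
import requests

def fetch_subsegment(url, first_byte, last_byte):
    """Request one subsegment via an HTTP byte-range GET (inclusive range)."""
    headers = {"Range": f"bytes={first_byte}-{last_byte}"}
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # a server honouring the range returns 206
    return response.content

# Offsets would normally be computed from the Segment Index ('sidx') box:
# data = fetch_subsegment("https://example.com/video/seg1.m4s", 0, 65535)
```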
- the indexing information of a segment may be put in the single box at the beginning of that segment or spread among many indexing boxes in the segment.
- Different methods of spreading are possible, such as hierarchical, daisy chain, and hybrid, for example. This technique may avoid adding a large box at the beginning of the segment and therefore may prevent a possible initial download delay.
- Sub-Representations are embedded in regular Representations and are described by the SubRepresentation element.
- SubRepresentation elements are contained in a Representation element.
- the SubRepresentation element describes properties of one or several media content components that are embedded in the Representation. It may for example describe the exact properties of an embedded audio component (such as codec, sampling rate, etc., for example), an embedded sub-title (such as codec, for example) or it may describe some embedded lower quality video layer (such as some lower frame rate, or otherwise, for example).
- Sub-Representations and Representation share some common attributes and elements. In case the @level attribute is present in the SubRepresentation element, the following applies:
- Sub-Representations provide the ability for accessing a lower quality version of the Representation in which they are contained.
- Sub-Representations for example allow extracting the audio track in a multiplexed Representation or may allow for efficient fast-forward or rewind operations if provided with lower frame rate;
- the Initialization Segment and/or the Media Segments and/or the Index Segments shall provide sufficient information such that the data can be easily accessed through HTTP partial GET requests. The details on providing such information are defined by the media format in use.
- the Initialization Segment contains the Level Assignment box.
- the Subsegment Index box (‘ssix’) is present for each Subsegment.
- the attribute @level specifies the level to which the described Sub-Representation is associated to in the Subsegment Index.
- the information in Representation, Sub-Representation and in the Level Assignment ('leva') box contains information on the assignment of media data to levels.
- Media data should have an order such that each level provides an enhancement compared to the lower levels.
- When the Level Assignment box is present, it applies to all movie fragments subsequent to the initial movie.
- a fraction is defined to consist of one or more Movie Fragment boxes and the associated Media Data boxes, possibly including only an initial part of the last Media Data Box.
- data for each level appears contiguously.
- Data for levels within a fraction appears in increasing order of level value. All data in a fraction is assigned to levels.
- the Level Assignment box provides a mapping from features, such as scalability layers or temporal sub-layers, to levels.
- a feature can be specified through a track, a sub-track within a track, or a sample grouping of a track.
- the Temporal Level sample grouping may be used to indicate a mapping of the pictures to temporal levels, which are equivalent to temporal sub-layers in HEVC. That is, HEVC pictures of a certain TemporalId value may be mapped to a particular temporal level using the Temporal Level sample grouping (and the same can be repeated for all TemporalId values).
- the Level Assignment box can then refer to the Temporal Level sample grouping in the indicated mapping to levels.
- the Subsegment Index box (’ssix’) provides a mapping from levels (as specified by the Level Assignment box) to byte ranges of the indexed subsegment.
- this box provides a compact index for how the data in a subsegment is ordered according to levels into partial subsegments. It enables a client to easily access data for partial subsegments by downloading ranges of data in the subsegment.
- each byte in the subsegment is assigned to a level. If the range is not associated with any information in the level assignment, then any level that is not included in the level assignment may be used.
- Subsegment Index boxes present per each Segment Index box that indexes only leaf subsegments, i.e. that only indexes subsegments but no segment indexes.
- a Subsegment Index box, if any, is the next box after the associated Segment Index box.
- a Subsegment Index box documents the subsegment that is indicated in the immediately preceding Segment Index box.
- Each level may be assigned to exactly one partial subsegment, i.e. byte ranges for one level are contiguous.
- Levels of partial subsegments are assigned by increasing numbers within a subsegment, i.e., samples of a partial subsegment may depend on any samples of preceding partial subsegments in the same subsegment, but not the other way around. For example, each partial subsegment contains samples having an identical temporal sub-layer and partial subsegments appear in increasing temporal sub-layer order within the subsegment.
- the final Media Data box may be incomplete, that is, less data is accessed than the length indication of the Media Data Box indicates is present.
- the length of the Media Data box may need adjusting, or padding may be used.
- the padding flag in the Level Assignment Box indicates whether this missing data can be replaced by zeros. If not, the sample data for samples assigned to levels that are not accessed is not present, and care should be taken.
- Virtual reality is a rapidly developing area of technology in which image or video content, sometimes accompanied by audio, is provided to a user device such as a user headset (a.k.a. head-mounted display, HMD).
- the user device may be provided with a live or stored feed from a content source, the feed representing a virtual space for immersive output through the user device.
- immersive multimedia, such as omnidirectional content consumption, is more complex to encode and decode for the end user. This is due to the higher degree of freedom available to the end user.
- Omnidirectional may refer to media content that has greater spatial extent than a field-of-view of a device rendering the content.
- Omnidirectional content may for example cover substantially 360 degrees in the horizontal dimension and substantially 180 degrees in the vertical dimension, but omnidirectional may also refer to content covering less than 360 degree view in the horizontal direction and/or 180 degree view in the vertical direction.
- the terms 360-degree video and virtual reality (VR) video may sometimes be used interchangeably. They may generally refer to video content that provides such a large field of view that only a part of the video is displayed at a single point of time in typical displaying arrangements.
- VR video may be viewed on a head-mounted display (HMD) that may be capable of displaying e.g. about 100-degree field of view.
- the spatial subset of the VR video content to be displayed may be selected based on the orientation of the HMD.
- a typical flat-panel viewing environment is assumed, wherein e.g. up to a 40-degree field-of-view may be displayed.
- MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard.
- OMAF defines a media format (comprising both file format derived from ISOBMFF and streaming formats for DASH and MPEG Media Transport).
- OMAF version 1 supports 360° video, images, and audio, as well as the associated timed text and facilitates three degrees of freedom (3DoF) content consumption, meaning that a viewport can be selected with any azimuth and elevation range and tilt angle that are covered by the omnidirectional content but the content is not adapted to any translational changes of the viewing position.
- OMAF v2 is planned to include features like support for multiple viewpoints, overlays, sub-picture compositions, and six degrees of freedom with a viewing space limited roughly to upper-body movements only.
- a viewpoint may be defined as the point or space from which the user views the scene; it usually corresponds to a camera position. Slight head motion does not imply a different viewpoint.
- observation point or Viewpoint refers to a volume in a three-dimensional space for virtual reality audio/video acquisition or playback.
- a Viewpoint is a trajectory, such as a circle, a region, or a volume, around the centre point of a device or rig used for omnidirectional audio/video acquisition and the position of the observer's head in the three-dimensional space in which the audio and video tracks are located.
- an observer's head position is tracked and the rendering is adjusted for head movements in addition to head rotations, and then a Viewpoint may be understood to be an initial or reference position of the observer's head.
- each observation point may be defined as a viewpoint by a viewpoint property descriptor.
- the definition may be stored in ISOBMFF or OMAF type of file format.
- the delivery could be HLS (HTTP Live Streaming) or RTSP/RTP (Real Time Streaming Protocol/Real-time Transport Protocol) streaming in addition to DASH.
- the term “spatially related Viewpoint group” refers to Viewpoints which have content that has a spatial relationship between them. For example, content captured by VR cameras at different locations in the same basketball court or a music concert captured from different locations on the stage.
- the term “logically related Viewpoint group” refers to related Viewpoints which do not have a clear spatial relationship but are logically related. The relative position of logically related Viewpoints is described based on the creative intent. For example, two Viewpoints that are members of a logically related Viewpoint group may correspond to content from the performance area and the dressing room. Another example could be two Viewpoints from the dressing rooms of the two competing teams that form a logically related Viewpoint group to permit users to traverse between both teams to see the player reactions.
- Viewpoints qualifying according to above definitions of the spatially related Viewpoint group and logically related Viewpoint group may be commonly referred to as mutually related Viewpoints, sometimes also as a mutually related Viewpoint group.
- random access may refer to the ability of a decoder to start decoding a stream at a point other than the beginning of the stream and recover an exact or approximate reconstructed media signal, such as a representation of the decoded pictures.
- a random access point and a recovery point may be used to characterize a random access operation.
- a random access point may be defined as a location in a media stream, such as an access unit or a coded picture within a video bitstream, where decoding can be initiated.
- a recovery point may be defined as a first location in a media stream or within the reconstructed signal characterized in that all media, such as decoded pictures, at or subsequent to a recovery point in output order are correct or approximately correct in content, when the decoding has started from the respective random access point. If the random access point is the same as the recovery point, the random access operation is instantaneous; otherwise, it may be gradual.
- Random access points enable, for example, seek, fast forward play, and fast backward play operations in locally stored media streams as well as in media streaming.
- servers can respond to seek requests by transmitting data starting from the random access point that is closest to (and in many cases preceding) the requested destination of the seek operation and/or decoders can start decoding from the random access point that is closest to (and in many cases preceding) the requested destination of the seek operation.
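- A minimal Python sketch of this seek behavior follows; the random access point times are illustrative.

import bisect

random_access_points = [0.0, 2.0, 4.0, 6.0, 8.0]  # illustrative RAP times, seconds

def seek_start(rap_times, target):
    # Latest random access point at or before the requested seek target.
    i = bisect.bisect_right(rap_times, target) - 1
    return rap_times[max(i, 0)]

print(seek_start(random_access_points, 5.3))  # 4.0: decoding starts here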
- Switching between coded streams of different bit-rates is a method that is used commonly in unicast streaming to match the transmitted bitrate to the expected network throughput and to avoid congestion in the network. Switching to another stream is possible at a random access point.
- random access points enable tuning in to a broadcast or multicast.
- a random access point can be coded as a response to a scene cut in the source sequence or as a response to an intra picture update request.
- MPEG Omnidirectional Media Format is described in the following by referring to Figure 4.
- a real-world audio-visual scene (A) is captured by audio sensors as well as a set of cameras or a camera device with multiple lenses and sensors.
- the acquisition results in a set of digital image/video (Bi) and audio (Ba) signals.
- the cameras/lenses typically cover all directions around the center point of the camera set or camera device, thus the name of 360-degree video.
- Audio can be captured using many different microphone configurations and stored as several different content formats, including channel-based signals, static or dynamic (i.e. moving through the 3D scene) object signals, and scene-based signals (e.g., Higher Order Ambisonics).
- the channel-based signals typically conform to one of the loudspeaker layouts defined in CICP.
- the loudspeaker layout signals of the rendered immersive audio program are binauralized for presentation via headphones.
- the images (Bi) of the same time instance are stitched, projected, and mapped onto a packed picture (D).
- Input images (Bi) are stitched and projected onto a three-dimensional projection structure that may for example be a unit sphere.
- the projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof.
- a projection structure may be defined as three-dimensional structure consisting of one or more surface(s) on which the captured VR image/video content is projected, and from which a respective projected picture can be formed.
- the image data on the projection structure is further arranged onto a two-dimensional projected picture (C).
- projection may be defined as a process by which a set of input images are projected onto a projected frame.
- representation formats include, for example, an equirectangular projection (ERP) format and a cube map projection (CMP) format. It may be considered that the projected picture covers the entire sphere.
- region-wise packing is then applied to map the projected picture onto a packed picture. If region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding. Otherwise, regions of the projected picture are mapped onto a packed picture (D) by indicating the location, shape, and size of each region in the packed picture, and the packed picture (D) is given as input to image/video encoding.
- region-wise packing may be defined as a process by which a projected picture is mapped to a packed picture.
- a packed picture may be defined as a picture that results from region-wise packing of a projected picture.
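- The following Python sketch illustrates region-wise packing for a single axis-aligned region; real OMAF packing additionally supports transforms such as rotation and mirroring, which are not modeled here, and the picture sizes are illustrative.

import numpy as np

def pack_region(projected, packed, src_xywh, dst_xywh):
    # Copy one region of the projected picture into the packed picture,
    # resampling (nearest neighbour) if source and destination sizes differ.
    sx, sy, sw, sh = src_xywh
    dx, dy, dw, dh = dst_xywh
    region = projected[sy:sy + sh, sx:sx + sw]
    ys = np.arange(dh) * sh // dh
    xs = np.arange(dw) * sw // dw
    packed[dy:dy + dh, dx:dx + dw] = region[ys][:, xs]

projected = np.zeros((960, 1920), dtype=np.uint8)  # hypothetical projected picture
packed = np.zeros((960, 1280), dtype=np.uint8)
# Downscale the top half of the projected picture into the packed picture.
pack_region(projected, packed, (0, 0, 1920, 480), (0, 0, 1280, 480))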
- Input images (Bi) are stitched and projected onto two three-dimensional projection structures, one for each eye.
- the image data on each projection structure is further arranged onto a two-dimensional projected picture (CL for left eye, CR for right eye), which covers the entire sphere.
- Frame packing is applied to pack the left view picture and right view picture onto the same projected picture.
- region-wise packing is then applied to pack the projected picture onto a packed picture, and the packed picture (D) is given as input to image/video encoding. If region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding.
- the image stitching, projection, and region-wise packing process can be carried out multiple times for the same source images to create different versions of the same content, e.g. for different orientations of the projection structure.
- the region-wise packing process can be performed multiple times from the same projected picture to create more than one sequence of packed pictures to be encoded.
- 360-degree panoramic content (i.e., images and video) covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device. The vertical field-of-view may vary and can be e.g. 180 degrees.
- a panoramic image covering a 360-degree field-of-view horizontally and a 180-degree field-of-view vertically can be represented by a sphere that can be mapped to a bounding cylinder, which can be cut vertically to form a 2D picture. This type of projection is known as equirectangular projection.
- the process of forming a monoscopic equirectangular panorama picture is illustrated in Figure 6.
- a set of input images such as fisheye images of a camera array or a camera device with multiple lenses and sensors, is stitched onto a spherical image.
- the spherical image is further projected onto a cylinder (without the top and bottom faces).
- the cylinder is unfolded to form a two-dimensional projected frame.
- one or more of the presented steps may be merged; for example, the input images may be directly projected onto a cylinder without an intermediate projection onto a sphere.
- the projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface.
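- As a concrete illustration of the equirectangular mapping, the following Python sketch converts sphere coordinates to a pixel position in the projected picture; the sampling conventions (edge alignment, azimuth direction) are assumptions made for the example rather than quotations from a specification.

def erp_to_pixel(azimuth_deg, elevation_deg, width, height):
    # Assumed conventions: left edge at azimuth +180, top row at elevation +90.
    u = (0.5 - azimuth_deg / 360.0) * width
    v = (0.5 - elevation_deg / 180.0) * height
    return int(u) % width, min(int(v), height - 1)

print(erp_to_pixel(0.0, 0.0, 1920, 960))  # (960, 480): centre of the picture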
- 360-degree content can be mapped onto different types of solid geometrical structures, such as a polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), a cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), a cylinder (directly, without projecting onto a sphere first), a cone, etc., and then unwrapped to a two-dimensional image plane.
- panoramic content with a 360-degree horizontal field-of-view but less than a 180-degree vertical field-of-view may be considered a special case of panoramic projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane.
- a panoramic image may have less than a 360-degree horizontal field-of-view and up to a 180-degree vertical field-of-view, while otherwise having the characteristics of the panoramic projection format.
- OMAF allows the omission of image stitching, projection, and region-wise packing, and encoding the image/video data in its captured format.
- images D are considered the same as images Bi and a limited number of fisheye images per time instance are encoded.
- the stitched images (D) are encoded as coded images (Ei) or a coded video bitstream (Ev).
- the captured audio (Ba) is encoded as an audio bitstream (Ea).
- the coded images, video, and/or audio are then composed into a media file for file playback (F) or a sequence of an initialization segment and media segments for streaming (Fs), according to a particular media container file format.
- the media container file format is the ISO base media file format.
- the file encapsulator also includes metadata into the file or the segments, such as projection and region-wise packing information assisting in rendering the decoded packed pictures.
- the metadata in the file may include:
- the file that the file encapsulator outputs (F) is identical to the file that the file decapsulator inputs (F').
- a file decapsulator processes the file (F') or the received segments (F’s) and extracts the coded bitstreams (E’a, EV, and/or E’i) and parses the metadata.
- the audio, video, and/or images are then decoded into decoded signals (B'a for audio, and D' for images/video).
- the decoded packed pictures (D') are projected onto the screen of a head-mounted display or any other display device based on the current viewing orientation or viewport and the projection, spherical coverage, projection structure orientation, and region-wise packing metadata parsed from the file.
- decoded audio (B'a) is rendered, e.g. through headphones, according to the current viewing orientation.
- the current viewing orientation is determined by the head tracking and possibly also eye tracking functionality. Besides being used by the renderer to render the appropriate part of decoded video and audio signals, the current viewing orientation may also be used by the video and audio decoders for decoding optimization.
- a video rendered by an application on an HMD renders a portion of the 360-degree video. This portion is defined here as the viewport.
- a viewport may be defined as a region of omnidirectional image or video suitable for display and viewing by the user.
- a current viewport (which may be sometimes referred simply as a viewport) may be defined as the part of the spherical video that is currently displayed and hence is viewable by the user(s).
- a video rendered by an application on a head-mounted display renders a portion of the 360-degrees video, which is referred to as a viewport.
- a viewport is a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display.
- a viewport may be characterized by a horizontal field-of-view (VHFoV) and a vertical field-of-view (VVFoV).
- the horizontal field-of-view of the viewport will be abbreviated with HFoV and, respectively, the vertical field-of-view of the viewport will be abbreviated with VFoV.
- a sphere region may be defined as a region on a sphere that may be specified by four great circles or by two azimuth circles and two elevation circles, and additionally by a tilt angle indicating rotation along the axis originating from the sphere origin and passing through the center point of the sphere region.
- a great circle may be defined as an intersection of the sphere and a plane that passes through the center point of the sphere.
- a great circle is also known as an orthodrome or Riemannian circle.
- An azimuth circle may be defined as a circle on the sphere connecting all points with the same azimuth value.
- An elevation circle may be defined as a circle on the sphere connecting all points with the same elevation value.
- the coordinate system of OMAF consists of a unit sphere and three coordinate axes, namely the X (back-to-front) axis, the Y (lateral, side-to-side) axis, and the Z (vertical, up) axis, where the three axes cross at the centre of the sphere.
- the location of a point on the sphere is identified by a pair of sphere coordinates: azimuth (φ) and elevation (θ).
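- As an illustration of this coordinate system, the following Python sketch converts sphere coordinates (azimuth, elevation, in degrees) to a unit vector on the X/Y/Z axes described above; the sign conventions are a plausible reading chosen for the example and should be checked against the specification.

import math

def sphere_to_xyz(azimuth_deg, elevation_deg):
    phi = math.radians(azimuth_deg)
    theta = math.radians(elevation_deg)
    # X points front, Z points up; Y completes the right-handed system.
    x = math.cos(theta) * math.cos(phi)
    y = math.cos(theta) * math.sin(phi)
    z = math.sin(theta)
    return x, y, z

print(sphere_to_xyz(0.0, 0.0))  # (1.0, 0.0, 0.0): straight ahead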
- OMAF version 2 introduces new concepts such as Viewpoints and specifies ISOBMFF metadata and DASH MPD signaling for Viewpoints.
- OMAF version 2 allows Viewpoints to be static, e.g., Viewpoints may be captured by 360° video cameras at fixed positions. Moreover, OMAF version 2 allows Viewpoints to be dynamic, e.g., a Viewpoint may be captured by a 360° video camera mounted on a flying drone. Metadata and signaling for both static and dynamic Viewpoints are supported in OMAF v2.
- OMAF version 2 enables the switching between mutually related Viewpoints to be seamless in the sense that after switching the user still sees the same object, e.g., the same player in a sport game, just from a different viewing angle.
- a viewpoint group is defined in OMAF version 2 and may comprise mutually related Viewpoints. However, when Viewpoints are not mutually related, switching between two Viewpoints may incur a noticeable cut or transition.
- OMAF v2 specifies the viewpoint entity grouping.
- Other possible file format mechanisms for this purpose in addition to or instead of entity grouping include a track group.
- metadata for a viewpoint is signaled to provide an identifier (ID) of the Viewpoint and a set of other information that can be used to assist streaming of the content and switching between different Viewpoints.
- Such information may include:
- a (textual) label for annotation of the viewpoint.
- mapping of the viewpoint to a viewpoint group with an indicated viewpoint group ID. This information provides a means to indicate whether the switching between two particular viewpoints can be seamless; if not, the client does not need to attempt it.
- Viewpoint position relative to the common reference coordinate system shared by all viewpoints of a viewpoint group, which enables a good user experience during viewpoint switching, provided that the client can properly utilize the positions of the two viewpoints involved in the switch in its rendering processing.
- rotation information for conversion between the global coordinate system of the viewpoint and the common reference coordinate system.
- rotation information for conversion between the common reference coordinate system and the compass points such as the geomagnetic north.
- the GPS position of the viewpoint, which enables the client application to position a viewpoint on a real-world map, which can be user-friendly in certain scenarios.
- viewpoint switching information, which provides the number of switching transitions possible from the current viewpoint and, for each of these, information such as the destination viewpoint, the viewport to view after switching, the presentation time at which to start playing back the destination viewpoint, and a recommended transition effect during switching (such as zoom-in, walk-through, fade-to-black, or mirroring).
- viewpoint looping information, indicating which time period of the presentation is looped and a maximum count of how many times the time period is looped.
- the looping feature can be used for requesting end-user's input for initiating viewpoint switching.
- the above information may be stored in timed metadata track(s) that may be time-synchronized with the media track(s).
- class ViewpointPosStruct() {
    signed int(32) viewpoint_pos_x;
    signed int(32) viewpoint_pos_y;
    signed int(32) viewpoint_pos_z;
}
- class ViewpointGpsPositionStruct() {
    signed int(32) viewpoint_gpspos_longitude;
    signed int(32) viewpoint_gpspos_latitude;
    signed int(32) viewpoint_gpspos_altitude;
}
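- A minimal Python sketch of parsing the two structures above from a byte buffer follows; it assumes big-endian signed 32-bit fields, as is usual for ISOBMFF, and ignores the surrounding box headers and the unit/scaling semantics of the fields.

import struct

def parse_viewpoint_pos(buf, offset=0):
    x, y, z = struct.unpack_from(">iii", buf, offset)  # big-endian int32 triplet
    return {"pos_x": x, "pos_y": y, "pos_z": z}

def parse_viewpoint_gps(buf, offset=0):
    lon, lat, alt = struct.unpack_from(">iii", buf, offset)
    return {"longitude": lon, "latitude": lat, "altitude": alt}

sample = struct.pack(">iii", 1000, -2000, 0)  # synthetic test payload
print(parse_viewpoint_pos(sample))            # {'pos_x': 1000, 'pos_y': -2000, 'pos_z': 0}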
- a Viewpoint element with a @schemeIdUri attribute equal to "urn:mpeg:mpegI:omaf:2018:vwpt" is referred to as a viewpoint information (VWPT) descriptor.
- the @value specifies the viewpoint ID of the viewpoint.
- the ViewPointInfo is a container element whose sub-elements and attributes provide information about the viewpoint.
- the ViewPointInfo@label attribute specifies a string that provides a human-readable label for the viewpoint.
- Position attributes of this element specify the position information for the viewpoint.
- Viewpoints may, for example, represent different viewing positions to the same scene (e.g. viewpoints which are part of the same group, or viewpoints with a common visual scene), or provide completely different scenes, e.g. in a virtual tourist tour.
- Viewpoints may be used to realize alternative storylines.
- Several options for user-originated Viewpoint switching may be indicated and associated with different user interactions, such as different selectable regions for activating a switch to a particular Viewpoint.
- Viewpoint looping may be indicated and used e.g. while waiting for the end-user's choice between switching options.
- Viewpoint switching in DASH streaming involves switching DASH adaptation sets. Network delay and bandwidth availability may lead to latency in delivery of the content corresponding to the destination viewpoint. Hence, the switch is hardly ever immediate; instead, either paused video (the last decoded frame from the original viewpoint) or black frames may be shown to the user:
- players with a single decoder can start decoding the destination viewpoint only after stopping the decoding of the current viewpoint. This can create a gap, especially since hierarchical video coding structures delay the output from the decoder; the decoder may be able to output frames only after it has processed e.g. 8 frames.
- the existing OMAF v2 specification draft includes signaling for a possible transition effect to take place when switching between viewpoints. Some of the transition effects, however, require having decoded content available from both viewpoints, whereas some require additional decoding resources, hence making them more difficult to use in resource-constrained systems. Hence, it may be expected that switching without any transition effect is the most common way to switch viewpoints.
- the ViewpointTimelineSwitchStruct contains the parameters t_min and t_max, which set the time limits within which the switch is enabled. E.g. if there is a viewpoint that is active only for 30 seconds, starting from 1 minute after the content start, then t_min is set to 60 sec and t_max to 90 sec.
- the offsets (absolute and relative), on the other hand, specify the position in the destination viewpoint timeline where the playback must start. E.g. if there is a viewpoint that can be used to replay past events, then the offset is set accordingly.
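- The following Python sketch shows one possible player-side interpretation of these timeline parameters; the names mirror the fields above, but the absolute/relative offset semantics are an assumption for illustration, not normative behavior.

def switch_allowed(now_sec, t_min, t_max):
    # A switch is enabled only inside the [t_min, t_max] window.
    return t_min <= now_sec <= t_max

def destination_start(now_sec, absolute_offset=None, relative_offset=None):
    # Playback position on the destination viewpoint timeline.
    if absolute_offset is not None:
        return absolute_offset            # e.g. jump back to replay a past event
    if relative_offset is not None:
        return now_sec + relative_offset  # keep timelines aligned with a shift
    return now_sec                        # default: continue at the same time

print(switch_allowed(75, 60, 90))                 # True: inside the 30-second window
print(destination_start(75, absolute_offset=30))  # 30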
- there is no signaling or hardcoded specification of what should happen with the current viewpoint data during switching, i.e. whether the player should continue playing it or not. In practice, there is typically a delay in the order of hundreds of milliseconds to a few seconds before the playback of the new viewpoint can start, due to network latencies etc.
- the OMAF v2 specification also allows automated viewpoint switching to take place when a specified number (1 or more) of loops of the viewpoint content has been played, or if a recommended viewport for multiple viewpoints timed metadata track is used.
- players should be able to prepare for the switch by prefetching media for the next viewpoint.
- a viewpoint switch may come as a surprise to the player, similar to user interaction.
- the method according to an aspect comprises encoding (700) omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representations being associated with mutually related viewpoints; and encoding (700) metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
- the content author may provide an indication for controlling the viewpoint switch e.g. from the first viewpoint representation to the second viewpoint representation in a manner considered desirable by the content author. Knowing the type of content in the first and the second viewpoint representation associated with the mutually related viewpoints, the content author may have the best knowledge of what kind of viewpoint switch would be preferable between the first and the second viewpoint representations. For ensuring a satisfying user experience, the indication provided by the content author for controlling the viewpoint switch may provide the best results.
- said indication comprises at least one parameter indicating at least one of the following:
- in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point;
- the switch from the first viewpoint representation to the second viewpoint representation is to be delayed until the second viewpoint representation has been decoded ready for rendering.
- the content author may provide at least two options for controlling the viewpoint switch: 1) switch to the second viewpoint immediately, or at least as soon as possible, i.e. at the next available switch point, or 2) continue playback of the first viewpoint until the second viewpoint is ready to be rendered.
- the content of the second viewpoint may not be ready for rendering, and a black screen or a transition effect may be displayed while waiting for the second viewpoint to be ready for rendering.
- an indication about a viewpoint switch being in progress may be displayed.
- the playback action triggering the viewpoint switch may relate at least to a user interaction, such as a user of an HMD turning and/or tilting the head to a new viewport, or a signalled request from the user to switch to another viewpoint.
- the playback action triggering the viewpoint switch may also relate to an error situation in the playback, e.g. the receiving, decoding and/or rendering of the first viewpoint representation is interrupted for some reason, and this triggers the viewpoint switch to the second viewpoint representation.
- said indication is configured to be encoded as a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
- a flag, which may be referred to herein as switch_type_flag, is used for indicating which one of the two options for controlling the viewpoint switch should be applied upon a playback action triggering a viewpoint switch.
- a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
- when switch_type_flag is equal to 1, the player should respond to the user interaction (or signaled request) immediately and switch to the new viewpoint immediately (or as soon as possible).
- the switch may take place with a period of black screen or via a transition effect, if a transition effect is configured for the switch.
- when switch_type_flag is equal to 0, the player should try to ensure video and audio content continuity over the switch and continue playing the current viewpoint until playable content is available for the destination viewpoint.
- the player may show some indication that the switch is in progress.
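- A minimal Python sketch of the two player strategies described above follows; the player facade and its method names are hypothetical placeholders rather than an API from the specification.

class StubPlayer:
    # Hypothetical player facade; a real player exposes a richer API.
    def stop_current(self): print("stop current viewpoint")
    def show_black_or_transition(self): print("black frames / transition effect")
    def start(self, vp): print(f"start {vp} at the next available switch point")
    def show_progress(self): print("switch-in-progress indication")
    def start_when_ready(self, vp): print(f"keep playing; start {vp} once decodable")

def on_viewpoint_switch(player, destination, switch_type_flag):
    if switch_type_flag:  # 1: respond immediately, or as soon as possible
        player.stop_current()
        player.show_black_or_transition()
        player.start(destination)
    else:                 # 0: keep the current viewpoint until the destination is ready
        player.show_progress()
        player.start_when_ready(destination)

on_viewpoint_switch(StubPlayer(), "viewpoint-2", 1)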
- a signalling of said indication is configured to be carried out by at least one syntax element included in a VWPT descriptor, e.g. within its ViewPointInfo element, or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
- audio may be common between the viewpoints, and it can continue uninterrupted over the viewpoint switch.
- a player concludes whether audio is common between the viewpoints. For example, when the same audio track is included in a first viewpoint entity group and a second viewpoint entity group, representing viewpoints between which the viewpoint switching takes place, the player may conclude that the audio is common between the viewpoints. Otherwise, the player may conclude that the audio is not common between the viewpoints. When the audio is concluded to be common between the viewpoints, the player continues the audio decoding and playback in an uninterrupted manner.
- the desired player behavior may be signaled as a transition effect. In particular, along with more detailed player implementation signaling, signaling the player behavior as a transition effect may be expected to be applicable to any viewpoint switch.
- a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointSwitchingListStruct syntax structure or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
- the switch_type_flag is included in the ViewpointSwitchingListStruct(), if the timeline_switch_offset_flag is equal to 0.
- a second parameter may be encoded in the metadata for indicating a timeout value for the switch from the first viewpoint representation to the second viewpoint representation to be completed.
- the signaling could be enhanced by a timeout value, or as a combination of the indication (e.g. the flag) and an additional parameter for the timeout value.
- the timeout value may indicate the time period in which the player is expected to complete the viewpoint switch. This may require the player to choose a representation of the destination viewpoint whose bandwidth meets the recent network conditions, if available.
- the timeout value may be incorporated in the ViewpointTimelineSwitchStruct() or ViewpointSwitchingListStruct() in the following manner:
unsigned int(1) switch_type_flag;
if (!switch_type_flag) {
    unsigned int(32) switch_duration;
}
- the timeout value is indicated by the parameter switch_duration. If the switch_type_flag is equal to 1, it indicates that the switching duration should be immediate or as fast as possible.
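- A Python sketch of parsing this excerpt from a byte buffer follows; it assumes the 1-bit flag occupies the most significant bit of the first byte and that switch_duration, when present, starts at the next byte, which is a simplification of the actual bit-packed syntax.

def parse_switch_timing(buf):
    flag = (buf[0] >> 7) & 1
    if flag:
        return {"switch_type_flag": 1}  # switch immediately / as fast as possible
    duration = int.from_bytes(buf[1:5], "big")  # byte alignment assumed here
    return {"switch_type_flag": 0, "switch_duration": duration}

print(parse_switch_timing(bytes([0x80])))                 # {'switch_type_flag': 1}
print(parse_switch_timing(bytes([0x00, 0, 0, 3, 0xE8])))  # duration 1000 (units assumed)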
- the viewpoint switching implementation for the player utilizes the t_max and t_min in the ViewpointTimelineSwitchStruct() to prefetch content. This may be further optimized by taking into account whether the user orientation in the current viewpoint is within a predefined threshold of, or overlapping, the viewpoint switch activation region; a sketch of such a proximity check is given below. This will be applicable to non-viewport-locked overlays. Prefetching may be started when the pointer of the HMD is getting close to the switch region. For non-HMD consumption, the region displayed on a conventional display is considered the current viewport orientation. The proximity threshold to the viewpoint switch activation region may be signaled with the ViewpointTimelineSwitchStruct().
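- The proximity check referenced above could be sketched as follows in Python; the 20-degree threshold is purely illustrative, standing in for a signaled value.

import math

def angular_distance_deg(az1, el1, az2, el2):
    # Great-circle angle between two viewing directions given in degrees.
    a1, e1, a2, e2 = map(math.radians, (az1, el1, az2, el2))
    cosang = (math.sin(e1) * math.sin(e2) +
              math.cos(e1) * math.cos(e2) * math.cos(a1 - a2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

PREFETCH_THRESHOLD_DEG = 20.0  # hypothetical signaled proximity threshold

def should_prefetch(view_az, view_el, region_az, region_el):
    return angular_distance_deg(view_az, view_el,
                                region_az, region_el) <= PREFETCH_THRESHOLD_DEG

print(should_prefetch(10.0, 0.0, 25.0, 5.0))  # True: about 16 degrees apart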
- Another aspect relates to the operation of a player or a decoder upon receiving the above-described indication for controlling the viewpoint switch. The operation may include, as shown in Figure 8, receiving (800) at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decoding (802) and rendering said first encoded viewpoint representation for playback; receiving (804), from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switching (806), in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
- the inventions relating to the encoding aspects may be implemented in an apparatus comprising: means for encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representations being associated with mutually related viewpoints; and means for encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
- the embodiments relating to the encoding aspects may likewise be implemented in an apparatus comprising at least one processor and at least one memory, said at least one memory having computer program code stored thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: encode omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representations being associated with mutually related viewpoints; and encode metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
- the embodiments relating to the decoding aspects may be implemented in an apparatus comprising: means for receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; means for decoding and means for rendering said first encoded viewpoint representation for playback; means for receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and means for switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
- the embodiments relating to the decoding aspects may likewise be implemented in an apparatus comprising at least one processor and at least one memory, said at least one memory having computer program code stored thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: receive at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decode and render said first encoded viewpoint representation for playback; receive, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switch, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
- Such apparatuses may comprise e.g. the functional units disclosed in any of the Figures 1, 2, 3a and 3b for implementing the embodiments.
- the decoder should be interpreted to cover any operational unit capable of carrying out the decoding operations, such as a player, a receiver, a gateway, a demultiplexer and/or a decoder.
- FIG. 9 is a graphical representation of an example multimedia communication system within which various embodiments may be implemented.
- a data source 1510 provides a source signal in an analog, uncompressed digital, or compressed digital format, or any combination of these formats.
- An encoder 1520 may include or be connected with pre-processing, such as data format conversion and/or filtering of the source signal.
- the encoder 1520 encodes the source signal into a coded media bitstream. It should be noted that a bitstream to be decoded may be received directly or indirectly from a remote device located within virtually any type of network. Additionally, the bitstream may be received from local hardware or software.
- the encoder 1520 may be capable of encoding more than one media type, such as audio and video, or more than one encoder 1520 may be required to code different media types of the source signal.
- the encoder 1520 may also get synthetically produced input, such as graphics and text, or it may be capable of producing coded bitstreams of synthetic media. In the following, only processing of one coded media bitstream of one media type is considered to simplify the description. It should be noted, however, that typically real-time broadcast services comprise several streams (typically at least one audio, video and text sub-titling stream). It should also be noted that the system may include many encoders, but in the figure only one encoder 1520 is represented to simplify the description without a lack of generality. It should be further understood that, although text and examples contained herein may specifically describe an encoding process, one skilled in the art would understand that the same concepts and principles also apply to the corresponding decoding process and vice versa.
- the coded media bitstream may be transferred to a storage 1530.
- the storage 1530 may comprise any type of mass memory to store the coded media bitstream.
- the format of the coded media bitstream in the storage 1530 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file, or the coded media bitstream may be encapsulated into a Segment format suitable for DASH (or a similar streaming system) and stored as a sequence of Segments. If one or more media bitstreams are encapsulated in a container file, a file generator (not shown in the figure) may be used to store the one or more media bitstreams in the file and create file format metadata, which may also be stored in the file.
- the encoder 1520 or the storage 1530 may comprise the file generator, or the file generator is operationally attached to either the encoder 1520 or the storage 1530.
- Some systems operate “live”, i.e. omit storage and transfer coded media bitstream from the encoder 1520 directly to the sender 1540.
- the coded media bitstream may then be transferred to the sender 1540, also referred to as the server, on a need basis.
- the format used in the transmission may be an elementary self-contained bitstream format, a packet stream format, a Segment format suitable for DASH (or a similar streaming system), or one or more coded media bitstreams may be encapsulated into a container file.
- the encoder 1520, the storage 1530, and the server 1540 may reside in the same physical device or they may be included in separate devices.
- the encoder 1520 and server 1540 may operate with live real-time content, in which case the coded media bitstream is typically not stored permanently, but rather buffered for small periods of time in the content encoder 1520 and/or in the server 1540 to smooth out variations in processing delay, transfer delay, and coded media bitrate.
- the server 1540 sends the coded media bitstream using a communication protocol stack.
- the stack may include but is not limited to one or more of Real-Time Transport Protocol (RTP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Transmission Control Protocol (TCP), and Internet Protocol (IP).
- the server 1540 encapsulates the coded media bitstream into packets.
- the sender 1540 may comprise or be operationally attached to a "sending file parser" (not shown in the figure).
- a sending file parser locates appropriate parts of the coded media bitstream to be conveyed over the communication protocol.
- the sending file parser may also help in creating the correct format for the communication protocol, such as packet headers and payloads.
- the multimedia container file may contain encapsulation instructions, such as hint tracks in the ISOBMFF, for encapsulation of the at least one of the contained media bitstream on the communication protocol.
- the server 1540 may or may not be connected to a gateway 1550 through a communication network, which may e.g. be a combination of a CDN, the Internet and/or one or more access networks.
- the gateway may also or alternatively be referred to as a middle-box.
- the gateway may be an edge server (of a CDN) or a web proxy. It is noted that the system may generally comprise any number of gateways or alike, but for the sake of simplicity, the following description only considers one gateway 1550.
- the gateway 1550 may perform different types of functions, such as translation of a packet stream according to one communication protocol stack to another communication protocol stack, merging and forking of data streams, and manipulation of data stream according to the downlink and/or receiver capabilities, such as controlling the bit rate of the forwarded stream according to prevailing downlink network conditions.
- the gateway 1550 may be a server entity in various embodiments.
- the system includes one or more receivers 1560, typically capable of receiving, demodulating, and decapsulating the transmitted signal into a coded media bitstream.
- the coded media bitstream may be transferred to a recording storage 1570.
- the recording storage 1570 may comprise any type of mass memory to store the coded media bitstream.
- the recording storage 1570 may alternatively or additionally comprise computation memory, such as random access memory.
- the format of the coded media bitstream in the recording storage 1570 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file.
- a container file is typically used and the receiver 1560 comprises or is attached to a container file generator producing a container file from input streams.
- Some systems operate “live,” i.e. omit the recording storage 1570 and transfer the coded media bitstream from the receiver 1560 directly to the decoder 1580. In some systems, only the most recent part of the recorded stream, e.g., the most recent 10-minute excerpt of the recorded stream, is maintained in the recording storage 1570, while any earlier recorded data is discarded from the recording storage 1570.
- the coded media bitstream may be transferred from the recording storage 1570 to the decoder 1580. If there are many coded media bitstreams, such as an audio stream and a video stream, associated with each other and encapsulated into a container file or a single media bitstream is encapsulated in a container file e.g. for easier access, a file parser (not shown in the figure) is used to decapsulate each coded media bitstream from the container file.
- the recording storage 1570 or the decoder 1580 may comprise the file parser, or the file parser is attached to either the recording storage 1570 or the decoder 1580. It should also be noted that the system may include many decoders, but here only one decoder 1580 is discussed to simplify the description without a lack of generality.
- the coded media bitstream may be processed further by the decoder 1580, whose output is one or more uncompressed media streams.
- a renderer 1590 may reproduce the uncompressed media streams with a loudspeaker or a display, for example.
- the receiver 1560, recording storage 1570, decoder 1580, and renderer 1590 may reside in the same physical device or they may be included in separate devices.
- a sender 1540 and/or a gateway 1550 may be configured to perform switching between different representations, e.g. for switching between different viewports of 360-degree video content, viewpoint switching, bitrate adaptation and/or fast start-up, and/or a sender 1540 and/or a gateway 1550 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to respond to requests of the receiver 1560 or prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. In other words, the receiver 1560 may initiate switching between representations.
- a request from the receiver can be, e.g., a request for a Segment or a Subsegment from a different representation than earlier, a request for a change of transmitted scalability layers and/or sub-layers, or a change of a rendering device having different capabilities compared to the previous one.
- a request for a Segment may be an HTTP GET request.
- a request for a Subsegment may be an HTTP GET request with a byte range.
- bitrate adjustment or bitrate adaptation may be used for example for providing so-called fast start-up in streaming services, where the bitrate of the transmitted stream is lower than the channel bitrate after starting or random-accessing the streaming in order to start playback immediately and to achieve a buffer occupancy level that tolerates occasional packet delays and/or retransmissions.
- Bitrate adaptation may include multiple representation or layer up-switching and representation or layer down-switching operations taking place in various orders.
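- The following Python sketch illustrates one simple way a client could combine fast start-up with up/down-switching; the bitrate ladder and safety factors are illustrative assumptions, not values from any specification.

BITRATE_LADDER = [500_000, 1_500_000, 3_000_000, 6_000_000]  # bits per second

def pick_representation(measured_throughput_bps, starting_up):
    # During fast start-up, stay well below the channel rate so the buffer
    # fills quickly; afterwards a milder safety margin suffices.
    safety = 0.5 if starting_up else 0.8
    budget = measured_throughput_bps * safety
    candidates = [b for b in BITRATE_LADDER if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER[0]

print(pick_representation(4_000_000, starting_up=True))   # 1500000
print(pick_representation(4_000_000, starting_up=False))  # 3000000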
- a decoder 1580 may be configured to perform switching between different representations e.g. for switching between different viewports of 360-degree video content, viewpoint switching, bitrate adaptation and/or fast start-up, and/or a decoder 1580 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to achieve faster decoding operation or to adapt the transmitted bitstream, e.g. in terms of bitrate, to prevailing conditions, such as throughput, of the network over which the bitstream is conveyed.
- the decoder may comprise means for requesting at least one decoder reset picture of the second representation for carrying out bitrate adaptation between the first representation and a third representation.
- Faster decoding operation might be needed for example if the device including the decoder 1580 is multi-tasking and uses computing resources for other purposes than decoding the video bitstream.
- faster decoding operation might be needed when content is played back at a faster pace than the normal playback speed, e.g. twice or three times faster than conventional real-time playback rate.
- user equipment may comprise a video codec such as those described in embodiments of the invention above. It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
- elements of a public land mobile network may also comprise video codecs as described above.
- the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
- some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
- While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- the embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
- any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
- the software may be stored on such physical media as memory chips or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD and the data variants thereof, and CD.
- the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
- the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multi-core processor architecture, as non-limiting examples.
- Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
- the design of integrated circuits is by and large a highly automated process.
- Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Library & Information Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FI20205338 | 2020-04-02 | ||
PCT/FI2021/050192 WO2021198553A1 (en) | 2020-04-02 | 2021-03-17 | An apparatus, a method and a computer program for video coding and decoding |
Publications (2)
Publication Number | Publication Date |
---|---|
EP4128808A1 true EP4128808A1 (en) | 2023-02-08 |
EP4128808A4 EP4128808A4 (en) | 2024-05-15 |
Family
ID=77927944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21779094.8A Pending EP4128808A4 (en) | 2020-04-02 | 2021-03-17 | An apparatus, a method and a computer program for video coding and decoding |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP4128808A4 (en) |
WO (1) | WO2021198553A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112423108B (en) * | 2019-08-20 | 2023-06-30 | 中兴通讯股份有限公司 | Method and device for processing code stream, first terminal, second terminal and storage medium |
CN113949829B (en) * | 2021-10-15 | 2022-09-20 | 腾讯科技(深圳)有限公司 | Media file encapsulation and decapsulation method, device, equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019194434A1 (en) * | 2018-04-05 | 2019-10-10 | 엘지전자 주식회사 | Method and device for transceiving metadata for plurality of viewpoints |
CN112237005B (en) * | 2018-04-05 | 2023-11-07 | Vid拓展公司 | Viewpoint metadata for omni-directional video |
WO2019200227A1 (en) * | 2018-04-13 | 2019-10-17 | Futurewei Technologies, Inc. | Signaling spatial region correspondence between virtual reality viewpoints |
SG11202110312XA (en) * | 2019-03-20 | 2021-10-28 | Beijing Xiaomi Mobile Software Co Ltd | Method and device for transmitting viewpoint switching capabilities in a vr360 application |
-
2021
- 2021-03-17 WO PCT/FI2021/050192 patent/WO2021198553A1/en unknown
- 2021-03-17 EP EP21779094.8A patent/EP4128808A4/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2021198553A1 (en) | 2021-10-07 |
EP4128808A4 (en) | 2024-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
RU2728904C1 (en) | Method and device for controlled selection of point of view and orientation of audiovisual content | |
KR102246002B1 (en) | Method, device, and computer program to improve streaming of virtual reality media content | |
US11094130B2 (en) | Method, an apparatus and a computer program product for video encoding and video decoding | |
US11943421B2 (en) | Method, an apparatus and a computer program product for virtual reality | |
TW201841512A (en) | Signaling important video information in network video streaming using mime type parameters | |
AU2017271981A1 (en) | Advanced signaling of a most-interested region in an image | |
KR20190009290A (en) | The area of most interest in the image | |
RU2767300C2 (en) | High-level transmission of service signals for video data of "fisheye" type | |
US11805303B2 (en) | Method and apparatus for storage and signaling of media segment sizes and priority ranks | |
EP3777137B1 (en) | Method and apparatus for signaling of viewing extents and viewing space for omnidirectional content | |
WO2020188142A1 (en) | Method and apparatus for grouping entities in media content | |
EP4128808A1 (en) | An apparatus, a method and a computer program for video coding and decoding | |
US11722751B2 (en) | Method, an apparatus and a computer program product for video encoding and video decoding | |
US12015805B2 (en) | Method, an apparatus and a computer program product for video streaming | |
EP3777219B1 (en) | Method and apparatus for signaling and storage of multiple viewpoints for omnidirectional audiovisual content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
17P | Request for examination filed |
Effective date: 20221102 |
AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20240411 |
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04N 19/597 20140101ALI20240405BHEP
Ipc: H04N 21/6373 20110101ALI20240405BHEP
Ipc: H04N 13/106 20180101ALI20240405BHEP
Ipc: H04N 13/178 20180101ALI20240405BHEP
Ipc: H04N 21/235 20110101ALI20240405BHEP
Ipc: H04N 21/4728 20110101ALI20240405BHEP
Ipc: H04N 21/6587 20110101AFI20240405BHEP |