
EP4128808A1 - An apparatus, a method and a computer program for video coding and decoding - Google Patents

An apparatus, a method and a computer program for video coding and decoding

Info

Publication number
EP4128808A1
Authority
EP
European Patent Office
Prior art keywords
viewpoint
representation
viewpoint representation
switch
encoded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21779094.8A
Other languages
German (de)
French (fr)
Other versions
EP4128808A4 (en)
Inventor
Ari Hourunranta
Sujeet Mate
Miska Hannuksela
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of EP4128808A1 publication Critical patent/EP4128808A1/en
Publication of EP4128808A4 publication Critical patent/EP4128808A4/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/172Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178Metadata, e.g. disparity information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194Transmission of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6587Control parameters, e.g. trick play commands, viewpoint selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Definitions

  • the present invention relates to an apparatus, a method and a computer program for video coding and decoding.
  • the bitrate is aimed to be reduced, e.g. such that the primary viewport (i.e., the current viewing orientation) is transmitted at the best quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution.
  • When the viewing orientation changes, e.g. when the user turns his/her head while viewing the content with a head-mounted display (HMD), another version of the content needs to be streamed, matching the new viewing orientation. This typically involves a viewpoint switch from a first viewpoint to a second viewpoint.
  • Viewpoints may, for example, represent different viewing positions of the same scene, or provide completely different scenes, e.g. in a virtual tourist tour.
  • a method comprises encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
  • An apparatus comprises means for encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and means for encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
  • An apparatus comprises at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: encode omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encode metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
  • said indication comprises at least one parameter indicating at least one of the following: - in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point;
  • the switch from the first viewpoint representation to the second viewpoint representation is to be delayed until the second viewpoint representation has been decoded ready for rendering.
  • said indication is configured to be encoded as a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
  • a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure for ISO/IEC 23090-2.
  • a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointSwitchingListStruct syntax structure for ISO/IEC 23090-2.
  • the apparatus further comprises means for encoding, responsive to said indication indicating that the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point, a second parameter for indicating a timeout value for the switch from the first viewpoint representation to the second viewpoint representation to be completed.
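  • As an illustration of the playback behaviour controlled by such an indication, the following Python sketch shows one way a player might act on it. It is a sketch only: the Player methods, the ViewpointSwitchIndication fields and the timeout handling are hypothetical and are not syntax elements of ISO/IEC 23090-2.

      # Illustrative sketch only; all names below are hypothetical.
      import time
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class ViewpointSwitchIndication:
          switch_at_next_switch_point: bool   # True: start the switch at the next available switch point
          timeout_ms: Optional[int] = None    # optional timeout for completing the switch

      def handle_viewpoint_switch(player, indication, src_viewpoint, dst_viewpoint):
          """Switch playback from src_viewpoint to dst_viewpoint according to the indication."""
          if indication.switch_at_next_switch_point:
              deadline = (time.monotonic() + indication.timeout_ms / 1000.0
                          if indication.timeout_ms is not None else None)
              while not player.at_switch_point(src_viewpoint):
                  if deadline is not None and time.monotonic() > deadline:
                      return False                       # timeout: keep playing the first viewpoint
                  player.render_next_frame(src_viewpoint)
              player.start_decoding(dst_viewpoint)
              player.render_from(dst_viewpoint)
          else:
              # Delay the switch until the second viewpoint representation is decoded and ready.
              player.start_decoding(dst_viewpoint)
              while not player.ready_for_rendering(dst_viewpoint):
                  player.render_next_frame(src_viewpoint)
              player.render_from(dst_viewpoint)
          return True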
  • a method comprises receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decoding and rendering said first encoded viewpoint representation for playback; receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
  • An apparatus comprises means for receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; means for decoding and means for rendering said first encoded viewpoint representation for playback; means for receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and means for switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
  • An apparatus comprises at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: receive at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decode and render said first encoded viewpoint representation for playback; receive, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switch, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
  • Further aspects relate to apparatuses and computer readable storage media having code stored thereon, which are arranged to carry out the above methods and one or more of the embodiments related thereto.
  • Figure 1 shows schematically an electronic device employing embodiments of the invention
  • Figure 2 shows schematically a user equipment suitable for employing embodiments of the invention
  • Figures 3a and 3b show schematically an encoder and a decoder suitable for implementing embodiments of the invention
  • Figure 4 shows an example of MPEG Omnidirectional Media Format (OMAF) concept
  • Figures 5a and 5b show two alternative methods for packing 360-degree video content into 2D packed pictures for encoding
  • Figure 6 shows the process of forming a monoscopic equirectangular panorama picture.
  • Figure 7 shows a flow chart of an encoding method according to an embodiment of the invention.
  • Figure 8 shows a flow chart of a decoding method according to an embodiment of the invention.
  • Figure 9 shows a schematic diagram of an example multimedia communication system within which various embodiments may be implemented.
  • Figure 1 shows a block diagram of a video coding system according to an example embodiment as a schematic block diagram of an exemplary apparatus or electronic device 50, which may incorporate a codec according to an embodiment of the invention.
  • Figure 2 shows a layout of an apparatus according to an example embodiment. The elements of Figs. 1 and 2 will be explained next.
  • the electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system. However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require encoding and decoding or encoding or decoding video images.
  • the apparatus 50 may comprise a housing 30 for incorporating and protecting the device.
  • the apparatus 50 further may comprise a display 32 in the form of a liquid crystal display.
  • the display may be any suitable display technology suitable to display an image or video.
  • the apparatus 50 may further comprise a keypad 34.
  • any suitable data or user interface mechanism may be employed.
  • the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.
  • the apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input.
  • the apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection.
  • the apparatus 50 may also comprise a battery (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator).
  • the apparatus may further comprise a camera capable of recording or capturing images and/or video.
  • the apparatus 50 may further comprise an infrared port for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/firewire wired connection.
  • the apparatus 50 may comprise a controller 56, processor or processor circuitry for controlling the apparatus 50.
  • the controller 56 may be connected to memory 58 which in embodiments of the invention may store both data in the form of image and audio data and/or may also store instructions for implementation on the controller 56.
  • the controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller.
  • the apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC and UICC reader for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
  • the apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network.
  • the apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
  • the apparatus 50 may comprise a camera capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing.
  • the apparatus may receive the video image data for processing from another device prior to transmission and/or storage.
  • the apparatus 50 may also receive either wirelessly or by a wired connection the image for coding/decoding.
  • the structural elements of apparatus 50 described above represent examples of means for performing a corresponding function.
  • a video codec consists of an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form.
  • a video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec.
  • encoder discards some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
  • Figures 3a and 3b show an encoder and decoder for encoding and decoding the 2D pictures.
  • Figure 3a illustrates an image to be encoded (In); a predicted representation of an image block (P'n); a prediction error signal (Dn); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); a transform (T) and inverse transform (T⁻¹); a quantization (Q) and inverse quantization (Q⁻¹); entropy encoding (E); a reference frame memory (RFM); inter prediction (Pinter); intra prediction (Pintra); mode selection (MS) and filtering (F).
  • Figure 3b illustrates a predicted representation of an image block (P'n); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); an inverse transform (T⁻¹); an inverse quantization (Q⁻¹); an entropy decoding (E⁻¹); a reference frame memory (RFM); a prediction (either inter or intra) (P); and filtering (F).
  • H.264/AVC encoders and High Efficiency Video Coding (H.265/HEVC, a.k.a. HEVC) encoders encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. a Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients.
  • Video codecs may also provide a transform skip mode, which the encoders may choose to use. In the transform skip mode, the prediction error is coded in a sample domain, for example by deriving a sample-wise difference value relative to certain adjacent samples and coding the sample-wise difference value with an entropy coder.
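  • As a toy illustration of the two-phase coding and the transform skip mode described above (not a conforming H.264/HEVC implementation; the block size, quantization step and the use of SciPy's DCT are arbitrary choices):

      import numpy as np
      from scipy.fftpack import dct, idct

      def encode_block(original, predicted, qstep=8.0, transform_skip=False):
          residual = original.astype(np.float64) - predicted       # prediction error
          if transform_skip:
              coeffs = residual                                     # sample-domain coding
          else:
              coeffs = dct(dct(residual, axis=0, norm='ortho'), axis=1, norm='ortho')
          return np.round(coeffs / qstep)                           # quantized values for entropy coding

      def decode_block(quantized, predicted, qstep=8.0, transform_skip=False):
          coeffs = quantized * qstep                                # inverse quantization
          if transform_skip:
              residual = coeffs
          else:
              residual = idct(idct(coeffs, axis=1, norm='ortho'), axis=0, norm='ortho')
          return predicted + residual                               # reconstructed block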
  • In inter prediction, also known as temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures).
  • IBC intra block copy
  • In intra-block-copy (IBC) prediction, prediction is applied similarly to temporal prediction, but the reference picture is the current picture and only previously decoded samples of it can be referred to in the prediction process.
  • Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively.
  • In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction, provided that they are performed with the same or a similar process as temporal prediction.
  • Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.
  • Inter prediction which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, reduces temporal redundancy.
  • In inter prediction, the sources of prediction are previously decoded pictures.
  • Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated.
  • Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
  • motion information is indicated by motion vectors associated with each motion compensated image block.
  • Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder) or decoded (at the decoder) and the prediction source block in one of the previously coded or decoded images (or picture).
  • One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
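  • A minimal sketch of the motion-vector difference coding mentioned above (the component-wise median predictor is just one common choice, used here for illustration):

      def predict_mv(neighbour_mvs):
          xs = sorted(mv[0] for mv in neighbour_mvs)
          ys = sorted(mv[1] for mv in neighbour_mvs)
          mid = len(neighbour_mvs) // 2
          return (xs[mid], ys[mid])                 # component-wise median predictor

      def encode_mv(mv, neighbour_mvs):
          px, py = predict_mv(neighbour_mvs)
          return (mv[0] - px, mv[1] - py)           # only the difference is entropy-coded

      def decode_mv(mvd, neighbour_mvs):
          px, py = predict_mv(neighbour_mvs)
          return (mvd[0] + px, mvd[1] + py)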
  • ISO International Standards Organization
  • MPEG Moving Picture Experts Group
  • MP4 MPEG-4 file format
  • HEVC High Efficiency Video Coding standard
  • ISOBMFF International Standards Organization (ISO) base media file format
  • Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which the embodiments may be implemented.
  • the aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
  • a basic building block in the ISO base media file format is called a box.
  • Each box has a header and a payload.
  • the box header indicates the type of the box and the size of the box in terms of bytes.
  • a box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes.
  • a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box.
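  • A small Python sketch of walking the box structure just described (size and four-character type in the header, a 64-bit "largesize" when size equals 1, and size 0 meaning "to the end"); error handling is omitted:

      import struct

      def iter_boxes(data, offset=0, end=None):
          end = len(data) if end is None else end
          while offset + 8 <= end:
              size, box_type = struct.unpack_from('>I4s', data, offset)
              header = 8
              if size == 1:                                   # 64-bit largesize follows
                  size, = struct.unpack_from('>Q', data, offset + 8)
                  header = 16
              elif size == 0:                                 # box extends to the end of the data
                  size = end - offset
              yield box_type.decode('ascii'), offset + header, offset + size
              offset += size

      # Example: print the top-level box types of a file
      # for box_type, payload_start, box_end in iter_boxes(open('file.mp4', 'rb').read()):
      #     print(box_type, box_end - payload_start)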
  • 4CC four character code
  • the media data may be provided in one or more instances of MediaDataBox ('mdat') and the MovieBox ('moov') may be used to enclose the metadata for timed media.
  • the ‘moov’ box may include one or more tracks, and each track may reside in one corresponding TrackBox (‘trak’).
  • Each track is associated with a handler, identified by a four-character code, specifying the track type.
  • Video, audio, and image sequence tracks can be collectively called media tracks, and they contain an elementary media stream.
  • Other track types comprise hint tracks and timed metadata tracks.
  • Tracks comprise samples, such as audio or video frames.
  • a media sample may correspond to a coded picture or an access unit.
  • a media track refers to samples (which may also be referred to as media samples) formatted according to a media compression format (and its encapsulation to the ISO base media file format).
  • a hint track refers to hint samples, containing cookbook instructions for constructing packets for transmission over an indicated communication protocol.
  • a timed metadata track may refer to samples describing referred media and/or hint samples.
  • a sample grouping in the ISO base media file format and its derivatives, such as the advanced video coding (AVC) file format and the scalable video coding (SVC) file format may be defined as an assignment of each sample in a track to be a member of one sample group, based on a grouping criterion.
  • a sample group in a sample grouping is not limited to being contiguous samples and may contain non-adjacent samples. As there may be more than one sample grouping for the samples in a track, each sample grouping may have a type field to indicate the type of grouping.
  • Sample groupings may be represented by two linked data structures: (1) a SampleToGroupBox (sbgp box) represents the assignment of samples to sample groups; and (2) a SampleGroupDescriptionBox (sgpd box) contains a sample group entry for each sample group describing the properties of the group. There may be multiple instances of the SampleToGroupBox and SampleGroupDescriptionBox based on different grouping criteria. These may be distinguished by a type field used to indicate the type of grouping.
  • SampleToGroupBox may comprise a grouping_type_parameter field that can be used e.g. to indicate a sub-type of the grouping.
  • an edit list provides a mapping between the presentation timeline and the media timeline.
  • an edit list provides for a linear offset of the presentation of samples in a track, for the indication of empty times, and for a particular sample to be dwelled on for a certain period of time.
  • the presentation timeline may be accordingly modified to provide for looping, such as for the looping videos of the various regions of the scene.
  • an EditListBox may be contained in EditBox, which is contained in TrackBox ('trak').
  • The flags field of the EditListBox specifies the repetition of the edit list.
  • setting a specific bit within the box flags (the least significant bit, i.e., flags & 1 in ANSI-C notation, where & indicates a bit-wise AND operation) equal to 0 specifies that the edit list is not repeated, while setting the specific bit (i.e., flags & 1 in ANSI-C notation) equal to 1 specifies that the edit list is repeated.
  • the values of box flags greater than 1 may be defined to be reserved for future extensions.
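  • The flag test described above amounts to a single bit-wise check, e.g.:

      def edit_list_is_repeated(flags: int) -> bool:
          # Least significant bit of the EditListBox flags: 1 = repeated, 0 = not repeated.
          return (flags & 1) == 1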
  • A track group enables grouping of tracks based on certain characteristics, or the tracks within a group may have a particular relationship. Track grouping, however, does not allow any image items in the group.
  • TrackGroupBox in ISOBMFF is as follows:

      aligned(8) class TrackGroupBox extends Box('trgr') {
      }
  • track_group_type indicates the grouping_type and shall be set to one of the following values, or a value registered, or a value from a derived specification or registration:
  • 'msrc' indicates that this track belongs to a multi-source presentation.
  • the tracks that have the same value of track_group_id within a TrackGroupTypeBox of track_group_type 'msrc' are mapped as being originated from the same source.
  • a recording of a video telephony call may have both audio and video for both participants, and the value of track_group_id associated with the audio track and the video track of one participant differs from the value of track_group_id associated with the tracks of the other participant.
  • the pair of track_group_id and track_group_type identifies a track group within the file.
  • the tracks that contain a particular TrackGroupTypeBox having the same value of track_group_id and track_group_type belong to the same track group.
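  • A sketch of how a reader could resolve such track groups: tracks carrying a TrackGroupTypeBox with the same (track_group_type, track_group_id) pair are collected into the same group. The Track objects here are a hypothetical in-memory representation, not an API defined by ISOBMFF:

      from collections import defaultdict

      def resolve_track_groups(tracks):
          groups = defaultdict(list)
          for track in tracks:
              for tg in track.track_group_type_boxes:      # e.g. track_group_type 'msrc'
                  groups[(tg.track_group_type, tg.track_group_id)].append(track.track_id)
          return dict(groups)                              # {(type, id): [track_id, ...]}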
  • the Entity grouping is similar to track grouping but enables grouping of both tracks and image items in the same group.
  • group_id is a non-negative integer assigned to the particular grouping that shall not be equal to any group_id value of any other EntityToGroupBox, any item_ID value of the hierarchy level (file, movie or track) that contains the GroupsListBox, or any track_ID value (when the GroupsListBox is contained in the file level).
  • num_entities_in_group specifies the number of entity_id values mapped to this entity group.
  • entity_id is resolved to an item, when an item with item_ID equal to entity_id is present in the hierarchy level (file, movie or track) that contains the GroupsListBox, or to a track, when a track with track_ID equal to entity_id is present and the GroupsListBox is contained in the file level.
  • Files conforming to the ISOBMFF may contain any non-timed objects, referred to as items, meta items, or metadata items, in a meta box (a.k.a. Metabox, four-character code: ‘meta’). While the name of the meta box refers to metadata, items can generally contain metadata or media data.
  • the meta box may reside at the top level of the file, within a movie box (four-character code: ‘moov’), and within a track box (four-character code: ‘trak’), but at most one meta box may occur at each of the file level, movie level, or track level.
  • the meta box may be required to contain a ‘hdlr’ box indicating the structure or format of the ‘meta’ box contents.
  • the meta box may list and characterize any number of items that can be referred and each one of them can be associated with a file name and are uniquely identified with the file by item identifier (item id) which is an integer value.
  • the metadata items may be for example stored in the 'idat' box of the meta box or in an 'mdat' box, or reside in a separate file. If the metadata is located external to the file, then its location may be declared by the DataInformationBox (four-character code: 'dinf').
  • the metadata may be encapsulated into either the XMLBox (four-character code: 'xml ') or the BinaryXMLBox (four-character code: 'bxml').
  • An item may be stored as a contiguous byte range, or it may be stored in several extents, each being a contiguous byte range. In other words, items may be stored fragmented into extents, e.g. to enable interleaving.
  • An extent is a contiguous subset of the bytes of the resource. The resource can be formed by concatenating the extents.
  • the ItemPropertiesBox enables the association of any item with an ordered set of item properties. Item properties may be regarded as small data records.
  • the ItemPropertiesBox consists of two parts: ItemPropertyContainerBox that contains an implicitly indexed list of item properties, and one or more ItemPropertyAssociationBox(es) that associate items with item properties.
  • Hypertext Transfer Protocol has been widely used for the delivery of real time multimedia content over the Internet, such as in video streaming applications.
  • HTTP Hypertext Transfer Protocol
  • 3GPP 3rd Generation Partnership Project
  • PSS packet-switched streaming
  • MPEG took 3GPP AHS Release 9 as a starting point for the MPEG DASH standard (ISO/IEC 23009-1: “Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats”).
  • MPEG DASH and 3GP-DASH are technically close to each other and may therefore be collectively referred to as DASH.
  • Some concepts, formats and operations of DASH are described below as an example of a video streaming system, wherein the embodiments may be implemented.
  • the aspects of the invention are not limited to the above standard documents but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
  • the multimedia content may be stored on an HTTP server and may be delivered using HTTP.
  • the content may be stored on the server in two parts: Media Presentation Description (MPD), which describes a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and segments, which contain the actual multimedia bitstreams in the form of chunks, in a single or multiple files.
  • MPD Media Presentation Description
  • the MPD provides the necessary information for clients to establish a dynamic adaptive streaming over HTTP.
  • the MPD contains information describing the media presentation, such as an HTTP-uniform resource locator (URL) of each Segment to make a GET Segment request.
  • URL HTTP-uniform resource locator
  • the DASH client may obtain the MPD e.g. by using HTTP, email, thumb drive, broadcast, or other transport methods.
  • the DASH client may become aware of the program timing, media-content availability, media types, resolutions, minimum and maximum bandwidths, and the existence of various encoded alternatives of multimedia components, accessibility features and required digital rights management (DRM), media-component locations on the network, and other content characteristics. Using this information, the DASH client may select the appropriate encoded alternative and start streaming the content by fetching the segments using e.g. HTTP GET requests. After appropriate buffering to allow for network throughput variations, the client may continue fetching the subsequent segments and also monitor the network bandwidth fluctuations. The client may decide how to adapt to the available bandwidth by fetching segments of different alternatives (with lower or higher bitrates) to maintain an adequate buffer.
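  • A rough sketch of that client behaviour, with the MPD reduced to a pre-parsed list of alternatives and the HTTP GETs done with the requests library (illustrative only; a real DASH client also handles buffering, timing and MPD updates):

      import requests

      def pick_representation(representations, measured_bps, safety=0.8):
          # representations: dicts with 'bandwidth' (bits/s) and 'segment_template'.
          usable = [r for r in representations if r['bandwidth'] <= measured_bps * safety]
          if usable:
              return max(usable, key=lambda r: r['bandwidth'])   # best quality that fits
          return min(representations, key=lambda r: r['bandwidth'])

      def fetch_segment(representation, segment_number):
          url = representation['segment_template'].format(number=segment_number)
          response = requests.get(url, timeout=10)
          response.raise_for_status()
          return response.content                                # handed to the buffer/decoder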
  • DRM digital rights management
  • a media content component or a media component may be defined as one continuous component of the media content with an assigned media component type that can be encoded individually into a media stream.
  • Media content may be defined as one media content period or a contiguous sequence of media content periods.
  • Media content component type may be defined as a single type of media content such as audio, video, or text.
  • a media stream may be defined as an encoded version of a media content component.
  • a hierarchical data model is used to structure media presentation as follows.
  • a media presentation consists of a sequence of one or more Periods; each Period contains one or more Groups; each Group contains one or more Adaptation Sets; each Adaptation Set contains one or more Representations; each Representation consists of one or more Segments.
  • a Group may be defined as a collection of Adaptation Sets that are not expected to be presented simultaneously.
  • An Adaptation Set may be defined as a set of interchangeable encoded versions of one or several media content components.
  • a Representation is one of the alternative choices of the media content or a subset thereof typically differing by the encoding choice, e.g. by bitrate, resolution, language, codec, etc.
  • the Segment contains certain duration of media data, and metadata to decode and present the included media content.
  • a Segment is identified by a URI and can typically be requested by a HTTP GET request.
  • a Segment may be defined as a unit of data associated with an HTTP- URL and optionally a byte range that are specified by an MPD.
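  • The hierarchical data model above can be summarised with plain data classes (attribute names are illustrative, not the exact MPD attribute names):

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Segment:
          url: str
          duration_s: float

      @dataclass
      class Representation:
          rep_id: str
          bandwidth: int
          segments: List[Segment] = field(default_factory=list)

      @dataclass
      class AdaptationSet:
          content_type: str                         # e.g. 'video', 'audio'
          representations: List[Representation] = field(default_factory=list)

      @dataclass
      class Period:
          start_s: float
          adaptation_sets: List[AdaptationSet] = field(default_factory=list)

      @dataclass
      class MediaPresentation:
          periods: List[Period] = field(default_factory=list)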
  • the DASH MPD complies with Extensible Markup Language (XML) and is therefore specified through elements and attributes as defined in XML.
  • the MPD may be specified using the following conventions: Elements in an XML document may be identified by an upper-case first letter and may appear in bold face as Element. To express that an element Element1 is contained in another element Element2, one may write Element2.Element1. If an element’s name consists of two or more combined words, camel casing may be used, such as ImportantElement, for example. Elements may be present either exactly once, or the minimum and maximum occurrence may be defined by <minOccurs> ... <maxOccurs>.
  • Attributes in an XML document may be identified by a lower-case first letter and may be preceded by a ‘@’-sign, e.g. @attribute, for example.
  • Attributes may have assigned a status in the XML as mandatory (M), optional (O), optional with default value (OD) and conditionally mandatory (CM).
  • descriptor elements are typically structured in the same way, in that they contain a @schemeIdUri attribute that provides a URI to identify the scheme and an optional attribute @value and an optional attribute @id.
  • the semantics of the element are specific to the scheme employed.
  • the URI identifying the scheme may be a URN or a URL.
  • Some descriptors are specified in MPEG-DASH (ISO/IEC 23009-1), while descriptors can additionally or alternatively be specified in other specifications. When specified in specifications other than MPEG-DASH, the MPD does not provide any specific information on how to use descriptor elements. It is up to the application or specification that employs DASH formats to instantiate the description elements with appropriate scheme information.
  • The application or specification that employs DASH formats is expected to provide a Scheme Identifier in the form of a URI and the value space for the element when that Scheme Identifier is used.
  • the Scheme Identifier appears in the @schemeIdUri attribute.
  • a text string may be defined for each value and this string may be included in the @value attribute.
  • any extension element or attribute may be defined in a separate namespace.
  • the @id value may be used to refer to a unique descriptor or to a group of descriptors. In the latter case, descriptors with identical values for the attribute @id may be required to be synonymous.
  • If the @schemeIdUri is a URN, equivalence may refer to lexical equivalence as defined in clause 5 of RFC 2141. If the @schemeIdUri is a URL, then equivalence may refer to equality on a character-for-character basis as defined in clause 6.2.1 of RFC 3986. If the @value attribute is not present, equivalence may be determined by the equivalence for @schemeIdUri only. Attributes and elements in extension namespaces might not be used for determining equivalence. The @id attribute may be ignored for equivalence determination.
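  • A simplified sketch of the equivalence rules above (the URN case is only approximated by a case-insensitive comparison; a complete implementation would follow RFC 2141 clause 5 and RFC 3986 clause 6.2.1 exactly):

      def descriptors_equivalent(a, b):
          # a, b: objects with scheme_id_uri and value (None when @value is absent).
          ua, ub = a.scheme_id_uri, b.scheme_id_uri
          if ua.lower().startswith('urn:') and ub.lower().startswith('urn:'):
              scheme_equal = ua.lower() == ub.lower()   # crude stand-in for RFC 2141 lexical equivalence
          else:
              scheme_equal = ua == ub                   # character-for-character (RFC 3986)
          if not scheme_equal:
              return False
          return a.value == b.value                     # @id and extension attributes are ignored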
  • MPEG-DASH specifies descriptors EssentialProperty and SupplementalProperty.
  • For the element EssentialProperty, the Media Presentation author expresses that the successful processing of the descriptor is essential to properly use the information in the parent element that contains this descriptor unless the element shares the same @id with another EssentialProperty element. If EssentialProperty elements share the same @id, then processing one of the EssentialProperty elements with the same value for @id is sufficient. At least one EssentialProperty element of each distinct @id value is expected to be processed. If the scheme or the value for an EssentialProperty descriptor is not recognized, the DASH client is expected to ignore the parent element that contains the descriptor. Multiple EssentialProperty elements with the same value for @id and with different values for @id may be present in an MPD.
  • For the element SupplementalProperty, the Media Presentation author expresses that the descriptor contains supplemental information that may be used by the DASH client for optimized processing. If the scheme or the value for a SupplementalProperty descriptor is not recognized, the DASH client is expected to ignore the descriptor. Multiple SupplementalProperty elements may be present in an MPD.
  • MPEG-DASH specifies a Viewpoint element that is formatted as a property descriptor.
  • the @schemeIdUri attribute of the Viewpoint element is used to identify the viewpoint scheme employed.
  • Adaptation Sets containing non-equivalent Viewpoint element values contain different media content components.
  • the Viewpoint elements may equally be applied to media content types that are not video.
  • Adaptation Sets with equivalent Viewpoint element values are intended to be presented together. This handling should be applied equally for recognized and unrecognized @schemeIdUri values.
  • An Initialization Segment may be defined as a Segment containing metadata that is necessary to present the media streams encapsulated in Media Segments.
  • an Initialization Segment may comprise the Movie Box ('moov') which might not include metadata for any samples, i.e. any metadata for samples is provided in 'moof' boxes.
  • a Media Segment contains a certain duration of media data for playback at a normal speed; such duration is referred to as Media Segment duration or Segment duration.
  • the content producer or service provider may select the Segment duration according to the desired characteristics of the service. For example, a relatively short Segment duration may be used in a live service to achieve a short end-to-end latency. The reason is that Segment duration is typically a lower bound on the end-to-end latency perceived by a DASH client, since a Segment is a discrete unit of generating media data for DASH. Content generation is typically done in such a manner that a whole Segment of media data is made available for a server. Furthermore, many client implementations use a Segment as the unit for GET requests.
  • a Segment can be requested by a DASH client only when the whole duration of Media Segment is available as well as encoded and encapsulated into a Segment.
  • different strategies of selecting Segment duration may be used.
  • a Segment may be further partitioned into Subsegments to enable downloading segments in multiple parts, for example.
  • Subsegments may be required to contain complete access units.
  • Subsegments may be indexed by Segment Index box, which contains information to map presentation time range and byte range for each Subsegment.
  • the Segment Index box may also describe subsegments and stream access points in the segment by signaling their durations and byte offsets.
  • a DASH client may use the information obtained from Segment Index box(es) to make an HTTP GET request for a specific Subsegment using a byte-range HTTP request. If a relatively long Segment duration is used, then Subsegments may be used to keep the size of HTTP responses reasonable and flexible for bitrate adaptation.
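  • A sketch of such a byte-range request, assuming the per-subsegment byte sizes have already been read from the Segment Index box (the sidx parsing itself is not shown):

      import requests

      def subsegment_byte_range(subsegment_sizes, index, first_offset=0):
          start = first_offset + sum(subsegment_sizes[:index])
          end = start + subsegment_sizes[index] - 1
          return start, end                                      # inclusive byte range

      def fetch_subsegment(segment_url, subsegment_sizes, index):
          start, end = subsegment_byte_range(subsegment_sizes, index)
          headers = {'Range': f'bytes={start}-{end}'}
          response = requests.get(segment_url, headers=headers, timeout=10)
          response.raise_for_status()                            # expect 206 Partial Content
          return response.content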
  • the indexing information of a segment may be put in the single box at the beginning of that segment or spread among many indexing boxes in the segment.
  • Different methods of spreading are possible, such as hierarchical, daisy chain, and hybrid, for example. This technique may avoid adding a large box at the beginning of the segment and therefore may prevent a possible initial download delay.
  • Sub-Representations are embedded in regular Representations and are described by the SubRepresentation element.
  • SubRepresentation elements are contained in a Representation element.
  • the SubRepresentation element describes properties of one or several media content components that are embedded in the Representation. It may for example describe the exact properties of an embedded audio component (such as codec, sampling rate, etc., for example), an embedded sub-title (such as codec, for example) or it may describe some embedded lower quality video layer (such as some lower frame rate, or otherwise, for example).
  • Sub-Representations and Representation share some common attributes and elements. In case the @level attribute is present in the SubRepresentation element, the following applies:
  • Sub-Representations provide the ability for accessing a lower quality version of the Representation in which they are contained.
  • Sub-Representations for example allow extracting the audio track in a multiplexed Representation or may allow for efficient fast-forward or rewind operations if provided with lower frame rate;
  • the Initialization Segment and/or the Media Segments and/or the Index Segments shall provide sufficient information such that the data can be easily accessed through HTTP partial GET requests. The details on providing such information are defined by the media format in use.
  • the Initialization Segment contains the Level Assignment box.
  • the Subsegment Index box (‘ssix’) is present for each Subsegment.
  • the attribute @level specifies the level with which the described Sub-Representation is associated in the Subsegment Index.
  • the information in Representation, Sub-Representation and in the Level Assignment (‘leva’) box contains information on the assignment of media data to levels.
  • Media data should have an order such that each level provides an enhancement compared to the lower levels.
  • When the Level Assignment box is present, it applies to all movie fragments subsequent to the initial movie.
  • a fraction is defined to consist of one or more Movie Fragment boxes and the associated Media Data boxes, possibly including only an initial part of the last Media Data Box.
  • data for each level appears contiguously.
  • Data for levels within a fraction appears in increasing order of level value. All data in a fraction is assigned to levels.
  • the Level Assignment box provides a mapping from features, such as scalability layers or temporal sub-layers, to levels.
  • a feature can be specified through a track, a sub-track within a track, or a sample grouping of a track.
  • the Temporal Level sample grouping may be used to indicate a mapping of the pictures to temporal levels, which are equivalent to temporal sub-layers in HEVC. That is, HEVC pictures of a certain TemporalId value may be mapped to a particular temporal level using the Temporal Level sample grouping (and the same can be repeated for all TemporalId values).
  • the Level Assignment box can then refer to the Temporal Level sample grouping in the indicated mapping to levels.
  • the Subsegment Index box (’ssix’) provides a mapping from levels (as specified by the Level Assignment box) to byte ranges of the indexed subsegment.
  • this box provides a compact index for how the data in a subsegment is ordered according to levels into partial subsegments. It enables a client to easily access data for partial subsegments by downloading ranges of data in the subsegment.
  • each byte in the subsegment is assigned to a level. If the range is not associated with any information in the level assignment, then any level that is not included in the level assignment may be used.
  • There may be zero or one Subsegment Index box present per each Segment Index box that indexes only leaf subsegments, i.e. that only indexes subsegments but no segment indexes.
  • A Subsegment Index box, if any, is the next box after the associated Segment Index box.
  • a Subsegment Index box documents the subsegment that is indicated in the immediately preceding Segment Index box.
  • Each level may be assigned to exactly one partial subsegment, i.e. byte ranges for one level are contiguous.
  • Levels of partial subsegments are assigned by increasing numbers within a subsegment, i.e., samples of a partial subsegment may depend on any samples of preceding partial subsegments in the same subsegment, but not the other way around. For example, each partial subsegment contains samples having an identical temporal sub-layer and partial subsegments appear in increasing temporal sub-layer order within the subsegment.
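  • Because partial subsegments are contiguous and ordered by increasing level, downloading all levels up to a chosen level means downloading a single prefix byte range, e.g.:

      def prefix_range_for_levels(partial_subsegments, max_level, subsegment_offset=0):
          # partial_subsegments: list of (level, byte_count) tuples in file order.
          length = 0
          for level, byte_count in partial_subsegments:
              if level > max_level:
                  break
              length += byte_count
          return subsegment_offset, subsegment_offset + length - 1   # inclusive byte range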
  • the final Media Data box may be incomplete, that is, less data is accessed than the length indication of the Media Data Box indicates is present.
  • the length of the Media Data box may need adjusting, or padding may be used.
  • the padding flag in the Level Assignment Box indicates whether this missing data can be replaced by zeros. If not, the sample data for samples assigned to levels that are not accessed is not present, and care should be taken.
  • Virtual reality is a rapidly developing area of technology in which image or video content, sometimes accompanied by audio, is provided to a user device such as a user headset (a.k.a. head-mounted display, HMD).
  • a user headset a.k.a. head-mounted display, HMD
  • the user device may be provided with a live or stored feed from a content source, the feed representing a virtual space for immersive output through the user device.
  • immersive multimedia such as omnidirectional content consumption, is more complex to encode and decode for the end user. This is due to the higher degree of freedom available to the end user.
  • 3DoF three degrees of freedom
  • Omnidirectional may refer to media content that has greater spatial extent than a field-of-view of a device rendering the content.
  • Omnidirectional content may for example cover substantially 360 degrees in the horizontal dimension and substantially 180 degrees in the vertical dimension, but omnidirectional may also refer to content covering less than 360 degree view in the horizontal direction and/or 180 degree view in the vertical direction.
  • The terms 360-degree video and virtual reality (VR) video may sometimes be used interchangeably. They may generally refer to video content that provides such a large field of view that only a part of the video is displayed at a single point of time in typical displaying arrangements.
  • VR video may be viewed on a head-mounted display (HMD) that may be capable of displaying e.g. about 100-degree field of view.
  • the spatial subset of the VR video content to be displayed may be selected based on the orientation of the HMD.
  • a typical flat-panel viewing environment is assumed, wherein e.g. up to 40-degree field-of-view may be displayed.
  • wide-FOV content e.g.
  • MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard.
  • OMAF defines a media format (comprising both file format derived from ISOBMFF and streaming formats for DASH and MPEG Media Transport).
  • OMAF version 1 supports 360° video, images, and audio, as well as the associated timed text and facilitates three degrees of freedom (3DoF) content consumption, meaning that a viewport can be selected with any azimuth and elevation range and tilt angle that are covered by the omnidirectional content but the content is not adapted to any translational changes of the viewing position.
  • OMAF v2 is planned to include features like support for multiple viewpoints, overlays, sub-picture compositions, and six degrees of freedom with a viewing space limited roughly to upper-body movements only.
  • a viewpoint may be defined as the point or space from which the user views the scene; it usually corresponds to a camera position. Slight head motion does not imply a different viewpoint.
  • observation point or Viewpoint refers to a volume in a three-dimensional space for virtual reality audio/video acquisition or playback.
• a Viewpoint is a trajectory, such as a circle, a region, or a volume, around the centre point of a device or rig used for omnidirectional audio/video acquisition and the position of the observer's head in the three-dimensional space in which the audio and video tracks are located.
• when an observer's head position is tracked and the rendering is adjusted for head movements in addition to head rotations, then a Viewpoint may be understood to be an initial or reference position of the observer's head.
  • each observation point may be defined as a viewpoint by a viewpoint property descriptor.
  • the definition may be stored in ISOBMFF or OMAF type of file format.
• the delivery could be HLS (HTTP Live Streaming) or RTSP/RTP (Real Time Streaming Protocol/Real-time Transport Protocol) streaming in addition to DASH.
  • the term “spatially related Viewpoint group” refers to Viewpoints which have content that has a spatial relationship between them. For example, content captured by VR cameras at different locations in the same basketball court or a music concert captured from different locations on the stage.
  • the term “logically related Viewpoint group” refers to related Viewpoints which do not have a clear spatial relationship but are logically related. The relative position of logically related Viewpoints is described based on the creative intent. For example, two Viewpoints that are members of a logically related Viewpoint group may correspond to content from the performance area and the dressing room. Another example could be two Viewpoints from the dressing rooms of the two competing teams that form a logically related Viewpoint group to permit users to traverse between both teams to see the player reactions.
  • Viewpoints qualifying according to above definitions of the spatially related Viewpoint group and logically related Viewpoint group may be commonly referred to as mutually related Viewpoints, sometimes also as a mutually related Viewpoint group.
  • random access may refer to the ability of a decoder to start decoding a stream at a point other than the beginning of the stream and recover an exact or approximate reconstructed media signal, such as a representation of the decoded pictures.
  • a random access point and a recovery point may be used to characterize a random access operation.
  • a random access point may be defined as a location in a media stream, such as an access unit or a coded picture within a video bitstream, where decoding can be initiated.
  • a recovery point may be defined as a first location in a media stream or within the reconstructed signal characterized in that all media, such as decoded pictures, at or subsequent to a recovery point in output order are correct or approximately correct in content, when the decoding has started from the respective random access point. If the random access point is the same as the recovery point, the random access operation is instantaneous; otherwise, it may be gradual.
  • Random access points enable, for example, seek, fast forward play, and fast backward play operations in locally stored media streams as well as in media streaming.
  • servers can respond to seek requests by transmitting data starting from the random access point that is closest to (and in many cases preceding) the requested destination of the seek operation and/or decoders can start decoding from the random access point that is closest to (and in many cases preceding) the requested destination of the seek operation.
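• As an illustrative sketch only (all names below are hypothetical and not part of any specification), locating the random access point closest to and preceding a requested seek position can be as simple as scanning the known random access point times:
    #include <stddef.h>

    /* Sketch: rap_times holds random access point times in increasing order.
     * Returns the index of the latest point at or before seek_target; if the
     * target precedes all points, the first point is returned. */
    static size_t find_preceding_rap(const double *rap_times, size_t count, double seek_target)
    {
        size_t best = 0;
        for (size_t i = 0; i < count; i++) {
            if (rap_times[i] <= seek_target)
                best = i;   /* latest point not later than the target */
            else
                break;      /* times are sorted; later points are past the target */
        }
        return best;
    }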
  • Switching between coded streams of different bit-rates is a method that is used commonly in unicast streaming to match the transmitted bitrate to the expected network throughput and to avoid congestion in the network. Switching to another stream is possible at a random access point.
  • random access points enable tuning in to a broadcast or multicast.
  • a random access point can be coded as a response to a scene cut in the source sequence or as a response to an intra picture update request.
  • MPEG Omnidirectional Media Format is described in the following by referring to Figure 4.
  • a real-world audio-visual scene (A) is captured by audio sensors as well as a set of cameras or a camera device with multiple lenses and sensors.
  • the acquisition results in a set of digital image/video (Bi) and audio (Ba) signals.
  • the cameras/lenses typically cover all directions around the center point of the camera set or camera device, thus the name of 360-degree video.
  • Audio can be captured using many different microphone configurations and stored as several different content formats, including channel-based signals, static or dynamic (i.e. moving through the 3D scene) object signals, and scene-based signals (e.g., Higher Order Ambisonics).
  • the channel-based signals typically conform to one of the loudspeaker layouts defined in CICP.
• the loudspeaker layout signals of the rendered immersive audio program are binauralized for presentation via headphones.
  • the images (Bi) of the same time instance are stitched, projected, and mapped onto a packed picture (D).
  • Input images (Bi) are stitched and projected onto a three-dimensional projection structure that may for example be a unit sphere.
  • the projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof.
• a projection structure may be defined as a three-dimensional structure consisting of one or more surface(s) on which the captured VR image/video content is projected, and from which a respective projected picture can be formed.
  • the image data on the projection structure is further arranged onto a two-dimensional projected picture (C).
  • projection may be defined as a process by which a set of input images are projected onto a projected frame.
• representation formats include, for example, an equirectangular projection (ERP) format and a cube map projection (CMP) format. It may be considered that the projected picture covers the entire sphere.
• region-wise packing is then applied to map the projected picture onto a packed picture. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding. Otherwise, regions of the projected picture are mapped onto a packed picture (D) by indicating the location, shape, and size of each region in the packed picture, and the packed picture (D) is given as input to image/video encoding.
  • region-wise packing may be defined as a process by which a projected picture is mapped to a packed picture.
• a packed picture may be defined as a picture that results from region-wise packing of a projected picture.
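• For illustration only, the per-region information mentioned above (the location, shape, and size of each region in the projected and packed pictures) could be modelled with a record along the following lines; the structure and field names are hypothetical and are not the OMAF syntax:
    #include <stdint.h>

    /* Hypothetical per-region record for region-wise packing: a rectangle of
     * the projected picture is mapped, possibly with rotation or mirroring,
     * to a rectangle of the packed picture. */
    struct region_mapping {
        uint32_t proj_top, proj_left;        /* position in the projected picture */
        uint32_t proj_width, proj_height;    /* size in the projected picture */
        uint32_t packed_top, packed_left;    /* position in the packed picture */
        uint32_t packed_width, packed_height;
        uint8_t  transform;                  /* rotation/mirroring applied, if any */
    };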
  • Input images (Bi) are stitched and projected onto two three-dimensional projection structures, one for each eye.
  • the image data on each projection structure is further arranged onto a two-dimensional projected picture (CL for left eye, CR for right eye), which covers the entire sphere.
  • Frame packing is applied to pack the left view picture and right view picture onto the same projected picture.
• region-wise packing is then applied to pack the projected picture onto a packed picture, and the packed picture (D) is given as input to image/video encoding. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding.
  • the image stitching, projection, and region-wise packing process can be carried out multiple times for the same source images to create different versions of the same content, e.g. for different orientations of the projection structure.
• the region-wise packing process can be performed multiple times from the same projected picture to create more than one sequence of packed pictures to be encoded.
• 360-degree panoramic content (i.e., images and video) covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device.
  • the vertical field-of-view may vary and can be e.g. 180 degrees.
• a panoramic image covering a 360-degree field-of-view horizontally and a 180-degree field-of-view vertically can be represented by a sphere that can be mapped to a bounding cylinder that can be cut vertically to form a 2D picture (this type of projection is known as equirectangular projection).
  • the process of forming a monoscopic equirectangular panorama picture is illustrated in Figure 6.
  • a set of input images such as fisheye images of a camera array or a camera device with multiple lenses and sensors, is stitched onto a spherical image.
  • the spherical image is further projected onto a cylinder (without the top and bottom faces).
• the cylinder is unfolded to form a two-dimensional projected frame.
  • one or more of the presented steps may be merged; for example, the input images may be directly projected onto a cylinder without an intermediate projection onto a sphere.
  • the projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface.
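• As a worked example of the equirectangular mapping (a sketch using one common convention; exact sign and sample-position conventions vary between specifications), a sphere point given by azimuth and elevation in degrees can be mapped to coordinates of a W x H projected picture as follows:
    /* Sketch: azimuth in [-180, 180] degrees (increasing to the left here),
     * elevation in [-90, 90] degrees (positive up). Outputs u (column) and
     * v (row) positions within a W x H equirectangular picture. */
    static void erp_map(double azimuth, double elevation, int W, int H, double *u, double *v)
    {
        *u = (0.5 - azimuth / 360.0) * W;    /* left edge corresponds to +180 degrees */
        *v = (0.5 - elevation / 180.0) * H;  /* top edge corresponds to +90 degrees */
    }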
• 360-degree content can be mapped onto different types of solid geometrical structures, such as a polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), a cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), a cylinder (directly without projecting onto a sphere first), a cone, etc. and then unwrapped to a two-dimensional image plane.
• panoramic content with a 360-degree horizontal field-of-view but with less than a 180-degree vertical field-of-view may be considered a special case of panoramic projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane.
• a panoramic image may have less than a 360-degree horizontal field-of-view and up to a 180-degree vertical field-of-view, while otherwise having the characteristics of a panoramic projection format.
• OMAF allows omitting image stitching, projection, and region-wise packing and encoding the image/video data in its captured format.
  • images D are considered the same as images Bi and a limited number of fisheye images per time instance are encoded.
  • the stitched images (D) are encoded as coded images (Ei) or a coded video bitstream (Ev).
  • the captured audio (Ba) is encoded as an audio bitstream (Ea).
  • the coded images, video, and/or audio are then composed into a media file for file playback (F) or a sequence of an initialization segment and media segments for streaming (Fs), according to a particular media container file format.
  • the media container file format is the ISO base media file format.
• the file encapsulator also includes metadata into the file or the segments, such as projection and region-wise packing information assisting in rendering the decoded packed pictures.
• the metadata in the file may include, for example, the projection format, the spherical coverage of the content, the orientation of the projection structure relative to the global coordinate axes, and region-wise packing information.
  • the file that the file encapsulator outputs (F) is identical to the file that the file decapsulator inputs (F').
• a file decapsulator processes the file (F') or the received segments (F's) and extracts the coded bitstreams (E'a, E'v, and/or E'i) and parses the metadata.
  • the audio, video, and/or images are then decoded into decoded signals (B'a for audio, and D' for images/video).
• the decoded packed pictures (D') are projected onto the screen of a head-mounted display or any other display device based on the current viewing orientation or viewport and the projection, spherical coverage, projection structure orientation, and region-wise packing metadata parsed from the file.
  • decoded audio (B'a) is rendered, e.g. through headphones, according to the current viewing orientation.
• the current viewing orientation is determined by the head tracking and possibly also eye tracking functionality. Besides being used by the renderer to render the appropriate part of decoded video and audio signals, the current viewing orientation may also be used by the video and audio decoders for decoding optimization.
• a video rendered by an application on an HMD renders a portion of the 360-degree video. This portion is defined here as a viewport.
  • a viewport may be defined as a region of omnidirectional image or video suitable for display and viewing by the user.
  • a current viewport (which may be sometimes referred simply as a viewport) may be defined as the part of the spherical video that is currently displayed and hence is viewable by the user(s).
• a video rendered by an application on a head-mounted display renders a portion of the 360-degree video, which is referred to as a viewport.
• a viewport is a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display.
• a viewport may be characterized by a horizontal field-of-view (VHFoV) and a vertical field-of-view (VVFoV).
  • the horizontal field-of-view of the viewport will be abbreviated with HFoV and, respectively, the vertical field-of-view of the viewport will be abbreviated with VFoV.
• a sphere region may be defined as a region on a sphere that may be specified by four great circles or by two azimuth circles and two elevation circles and additionally by a tilt angle indicating rotation along the axis originating from the sphere origin passing through the center point of the sphere region.
  • a great circle may be defined as an intersection of the sphere and a plane that passes through the center point of the sphere.
  • a great circle is also known as an orthodrome or Riemannian circle.
  • An azimuth circle may be defined as a circle on the sphere connecting all points with the same azimuth value.
  • An elevation circle may be defined as a circle on the sphere connecting all points with the same elevation value.
  • the coordinate system of OMAF consists of a unit sphere and three coordinate axes, namely the X (back-to-front) axis, the Y (lateral, side-to-side) axis, and the Z (vertical, up) axis, where the three axes cross at the centre of the sphere.
• the location of a point on the sphere is identified by a pair of sphere coordinates azimuth (φ) and elevation (θ).
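• For illustration, sphere coordinates can be converted to a point on the unit sphere in such a coordinate system roughly as follows (a sketch; the normative axis and sign conventions are those defined by the specification):
    #include <math.h>

    #define DEG2RAD(d) ((d) * M_PI / 180.0)

    /* Sketch: azimuth and elevation in degrees to a unit vector, with the
     * X (back-to-front), Y (lateral) and Z (vertical) axes described above. */
    static void sphere_to_xyz(double azimuth, double elevation, double *x, double *y, double *z)
    {
        *x = cos(DEG2RAD(elevation)) * cos(DEG2RAD(azimuth));
        *y = cos(DEG2RAD(elevation)) * sin(DEG2RAD(azimuth));
        *z = sin(DEG2RAD(elevation));
    }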
  • OMAF version 2 introduces new concepts such as Viewpoints and specifies ISOBMFF metadata and DASH MPD signaling for Viewpoints.
• OMAF version 2 allows Viewpoints to be static, e.g., Viewpoints may be captured by 360° video cameras at fixed positions. Moreover, OMAF version 2 allows Viewpoints to be dynamic, e.g., a Viewpoint may be captured by a 360° video camera mounted on a flying drone. Metadata and signaling for both static and dynamic Viewpoints are supported in OMAF v2.
• OMAF version 2 enables the switching between mutually related Viewpoints to be seamless in the sense that after switching the user still sees the same object, e.g., the same player in a sport game, just from a different viewing angle.
  • viewpoint group is defined in OMAF version 2 and may comprise mutually related Viewpoints. However, when Viewpoints are not mutually related, switching between the two Viewpoints may incur a noticeable cut or transition.
  • OMAF v2 specifies the viewpoint entity grouping.
  • Other possible file format mechanisms for this purpose in addition to or instead of entity grouping include a track group.
  • metadata for viewpoint is signaled, to provide an identifier (ID) of the Viewpoint and a set of other information that can be used to assist streaming of the content and switching between different Viewpoints.
  • Such information may include:
  • a (textual) label for annotation of the viewpoint.
• mapping of the viewpoint to a viewpoint group of an indicated viewpoint group ID. This information provides a means to indicate whether the switching between two particular viewpoints can be seamless, and if not, the client does not need to bother trying it.
• Viewpoint position relative to the common reference coordinate system shared by all viewpoints of a viewpoint group. This is to enable a good user experience during viewpoint switching, provided that the client can properly utilize the positions of the two viewpoints involved in the switching in its rendering processing.
• Rotation information for conversion between the global coordinate system of the viewpoint and the common reference coordinate system.
  • rotation information for conversion between the common reference coordinate system and the compass points such as the geomagnetic north.
  • the GPS position of the viewpoint which enables the client application positioning of a viewpoint into a real-world map, which can be user-friendly in certain scenarios.
• viewpoint switching information which provides a number of switching transitions possible from the current viewpoint, and for each of these, information such as the destination viewpoint, the viewport to view after switching, the presentation time to start playing back the destination viewpoint, and a recommended transition effect during switching (such as zoom-in, walk-through, fade-to-black, or mirroring).
• viewpoint looping information indicating which time period of the presentation is looped and a maximum count of how many times the time period is looped.
  • the looping feature can be used for requesting end-user's input for initiating viewpoint switching.
  • the above information may be stored in timed metadata track(s) that may be time-synchronized with the media track(s).
• class ViewpointPosStruct() {
      signed int(32) viewpoint_pos_x;
      signed int(32) viewpoint_pos_y;
      signed int(32) viewpoint_pos_z;
   }
• class ViewpointGpsPositionStruct() {
      signed int(32) viewpoint_gpspos_longitude;
      signed int(32) viewpoint_gpspos_latitude;
      signed int(32) viewpoint_gpspos_altitude;
   }
• a Viewpoint element with a @schemeIdUri attribute equal to "urn:mpeg:mpegI:omaf:2018:vwpt" is referred to as a viewpoint information (VWPT) descriptor.
  • the @value specifies the viewpoint ID of the viewpoint.
• the ViewPointInfo is a container element whose sub-elements and attributes provide information about the viewpoint.
• the ViewPointInfo@label attribute specifies a string that provides a human-readable label for the viewpoint.
  • Position attributes of this element specify the position information for the viewpoint.
• Viewpoints may, for example, represent different viewing positions to the same scene, or provide completely different scenes, e.g. in a virtual tourist tour.
• viewpoints which are part of the same group or viewpoints with a common visual scene.
  • Viewpoints may be used to realize alternative storylines.
• Several options for user-originated Viewpoint switching may be indicated and associated with different user interactions, such as different selectable regions for activating a switch to a particular Viewpoint.
• Viewpoint looping may be indicated and used e.g. for waiting for the end-user's choice between switching options.
• Viewpoint switching in DASH streaming involves switching DASH adaptation sets. Network delay and bandwidth availability may lead to latency in delivery of the content corresponding to the destination viewpoint. Hence, the switch is hardly ever immediate, but either paused video (the last decoded frame from the original viewpoint) or black frames may be shown to the user:
  • players with a single decoder can start decoding the destination viewpoint only after stopping the decoding of the current viewpoint. This can create a gap especially since hierarchical video coding structures delay the output from the decoder; the decoder may be able to output frames only after it has processed e.g. 8 frames.
  • the existing OMAF v2 specification draft includes signaling for a possible transition effect to take place when switching between viewpoints. Some of the transition effects, however, require having decoded content available from both viewpoints, whereas some require additional decoding resources, hence making them more difficult to use in resource-constrained systems. Hence, it may be expected that switching without any transition effect is most common way to switch viewpoints.
  • the ViewpointTimelineSwitchStruct contains the parameters t_min and t_max, which set the limits when the switch is enabled. E.g. if there is a viewpoint that is active only for 30 seconds, starting from 1 minute from the content start, then t_min is set as 60 sec, and t_max as 90 sec.
  • the offsets (absolute and relative), on the other hand, specify the position in the destination viewpoint timeline where the playback must start. E.g. if there is a viewpoint that can be used to show again past events, then the offset is set accordingly.
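• As a sketch of how a player could apply these parameters (hypothetical variable and function names, not specification syntax): the switch is offered only inside the [t_min, t_max] window, and the playback start time in the destination viewpoint is derived from the signalled offset:
    #include <stdbool.h>

    /* Sketch: is the switch currently enabled? t, t_min and t_max are
     * presentation times in seconds. */
    static bool switch_enabled(double t, double t_min, double t_max)
    {
        return t >= t_min && t <= t_max;
    }

    /* Sketch: an absolute offset points directly into the destination
     * timeline (e.g. to replay a past event); a relative offset is added to
     * the current playback position. */
    static double destination_start_time(double current_time, bool offset_is_absolute, double offset)
    {
        return offset_is_absolute ? offset : current_time + offset;
    }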
  • there is no signaling or hardcoded specification what should happen with the current viewpoint data during switching, i.e. whether the player should continue playing it or not. In practice, there is typically a delay in the order of hundreds of milliseconds to a few seconds before the playback of the new viewpoint can start due to network latencies etc.
  • the OMAF v2 specification allows also automated viewpoint switching to take place when a specified number (1 or more) of loops for the viewpoint content has been played, or if recommended viewport for multiple viewpoints timed metadata track is used.
  • players should be able to prepare for the switch by prefetching media for the next viewpoint.
  • a viewpoint switch may come as a surprise to the player, similar to user interaction.
• the method according to an aspect comprises encoding (700) omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encoding (702) metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
  • the content author may provide an indication for controlling the viewpoint switch e.g. from the first viewpoint representation to the second viewpoint representation in a manner considered desirable by the content author. Knowing the type of content in the first and the second viewpoint representation associated with the mutually related viewpoints, the content author may have the best knowledge of what kind of viewpoint switch would be preferable between the first and the second viewpoint representations. For ensuring a satisfying user experience, the indication provided by the content author for controlling the viewpoint switch may provide the best results.
  • said indication comprises at least one parameter indicating at least one of the following: - in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point;
  • the switch from the first viewpoint representation to the second viewpoint representation is to be delayed until the second viewpoint representation has been decoded ready for rendering.
  • the content author may provide at least two options for controlling the viewpoint switch: 1) switch to the second viewpoint immediately, or at least as soon as possible, i.e. at the next available switch point, or 2) continue playback of the first viewpoint until the second viewpoint is ready to be rendered.
• the content of the second viewpoint may not be ready for rendering, and a black screen or a transition effect may be displayed while waiting for the second viewpoint to be ready for rendering.
  • an indication about a viewpoint switch being in progress may be displayed.
• the playback action triggering the viewpoint switch may relate at least to a user interaction, such as a user of an HMD turning and/or tilting the head to a new viewport, or a signalled request from the user to switch to another viewpoint.
  • the playback action triggering the viewpoint switch may also relate to an error situation in the playback, e.g. the receiving, decoding and/or rendering of the first viewpoint representation is interrupted for some reason, and this triggers the viewpoint switch to the second viewpoint representation.
  • said indication is configured to be encoded as a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
• a flag, which may be referred to herein as switch_type_flag, is used for indicating which one of the two options for controlling the viewpoint switch should be applied upon a playback action triggering a viewpoint switch.
  • a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
• switch_type_flag equal to 1 indicates that the player should respond to the user interaction (or signaled request) immediately and switch to the new viewpoint immediately (or as soon as possible).
  • the switch may take place with a period of black screen or via a transition effect, if a transition effect is configured for the switch.
• switch_type_flag equal to 0 indicates that the player should try to ensure video and audio content continuity over the switch and continue playing the current viewpoint until playable content is available for the destination viewpoint.
  • the player may show some indication that the switch is in progress.
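• A minimal player-side sketch of the two behaviours described above (the player functions below are hypothetical placeholders, not an API defined by OMAF or this description):
    #include <stdbool.h>

    /* Hypothetical player hooks. */
    void start_fetch_and_decode(int viewpoint_id);
    bool destination_ready(int viewpoint_id);
    void stop_viewpoint(int viewpoint_id);
    void present_frame(int viewpoint_id);
    void show_transition_or_black(void);
    void show_switch_in_progress(void);

    /* Sketch: react to a viewpoint switch trigger according to switch_type_flag. */
    static void on_switch_trigger(bool switch_type_flag, int current_vp, int dest_vp)
    {
        start_fetch_and_decode(dest_vp);
        if (switch_type_flag) {
            /* Respond immediately: stop the current viewpoint and bridge the
             * gap with a transition effect or black frames until ready. */
            stop_viewpoint(current_vp);
            while (!destination_ready(dest_vp))
                show_transition_or_black();
        } else {
            /* Preserve continuity: keep playing the current viewpoint until
             * playable content is available for the destination viewpoint. */
            while (!destination_ready(dest_vp)) {
                present_frame(current_vp);
                show_switch_in_progress();
            }
            stop_viewpoint(current_vp);
        }
        present_frame(dest_vp);
    }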
• a signalling of said indication is configured to be carried out by at least one syntax element included in a VWPT descriptor, e.g. within its ViewPointInfo element or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
  • audio may be common between the viewpoints, and it can continue uninterrupted over the viewpoint switch.
  • a player concludes whether audio is common between the viewpoints. For example, when the same audio track is included in a first viewpoint entity group and a second viewpoint entity group, representing viewpoints between which the viewpoint switching takes place, the player may conclude that the audio is common between the viewpoints. Otherwise, the player may conclude that the audio is not common between the viewpoints. When the audio is concluded to be common between the viewpoints, the player continues the audio decoding and playback in an uninterrupted manner.
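• A sketch of the above conclusion (the data structures below are hypothetical simplifications; the entity grouping itself is defined by the file format):
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical view of a viewpoint entity group: the track IDs it lists. */
    struct entity_group {
        const uint32_t *track_ids;
        int num_tracks;
    };

    /* Sketch: audio is concluded to be common between two viewpoints if the
     * same audio track is included in both viewpoint entity groups. */
    static bool audio_is_common(uint32_t audio_track_id,
                                const struct entity_group *vp1,
                                const struct entity_group *vp2)
    {
        bool in1 = false, in2 = false;
        for (int i = 0; i < vp1->num_tracks; i++)
            if (vp1->track_ids[i] == audio_track_id) in1 = true;
        for (int i = 0; i < vp2->num_tracks; i++)
            if (vp2->track_ids[i] == audio_track_id) in2 = true;
        return in1 && in2;
    }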
  • the desired player behavior may be signaled as a transition effect. Especially, along with more detailed player implementation signaling the player behavior as a transition effect may be expected to be applicable to any viewpoint switch.
  • a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointSwitchingListStruct syntax structure or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
• the switch_type_flag is included in the ViewpointSwitchingListStruct(), if the timeline_switch_offset_flag is equal to 0.
  • a second parameter may be encoded in the metadata for indicating a timeout value for the switch from the first viewpoint representation to the second viewpoint representation to be completed.
  • the signaling could be enhanced by a timeout value, or as a combination of the indication (e.g. the flag) and an additional parameter for the timeout value.
  • the timeout value may indicate the time period in which the player is expected to complete the viewpoint switch. This may require the player to choose a bandwidth representation of the destination viewport that meets the recent network conditions, if available.
• the timeout value may be incorporated in the ViewpointTimelineSwitchStruct() or ViewpointSwitchingListStruct() in the following manner:
   unsigned int(1) switch_type_flag;
   if (!switch_type_flag) {
      unsigned int(32) switch_duration;
   }
  • the timeout value is indicated by the parameter switch_duration. If the switch_type_flag is equal to 1, it indicates that the switching duration should be immediate or as fast as possible.
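• As an illustrative sketch of how a player might honour the timeout (hypothetical structures and names): it could pick the highest-bitrate representation of the destination viewpoint whose first segment can plausibly be fetched within the signalled duration at the recently measured throughput:
    #include <stdint.h>

    /* Hypothetical description of one available representation of the
     * destination viewpoint. */
    struct representation {
        uint32_t bitrate_bps;          /* advertised bitrate */
        uint32_t first_segment_bytes;  /* estimated size of the first segment */
    };

    /* Sketch: choose the best representation that fits the switch_duration
     * budget (seconds) given the measured throughput (bits per second).
     * Returns -1 if none fits, in which case the lowest bitrate could be used. */
    static int pick_representation(const struct representation *reps, int count,
                                   double switch_duration_s, double throughput_bps)
    {
        int best = -1;
        for (int i = 0; i < count; i++) {
            double fetch_time = (8.0 * reps[i].first_segment_bytes) / throughput_bps;
            if (fetch_time <= switch_duration_s &&
                (best < 0 || reps[i].bitrate_bps > reps[best].bitrate_bps))
                best = i;
        }
        return best;
    }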
• the viewpoint switching implementation for the player utilizes the t_max and t_min in the ViewpointSwitchingTimelineStruct() to prefetch content. This may be further optimized by taking into account whether the user orientation in the current viewpoint is within a predefined threshold of, or overlapping, the viewpoint switch activation region. This is applicable to non-viewport-locked overlays. Prefetching may be started when the pointer of the HMD is getting close to the switch region. For non-HMD consumption, the region displayed on a conventional display is considered the current viewport orientation. The proximity threshold to the viewpoint switch activation region may be signaled with the ViewpointSwitchTimelineStruct().
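• A sketch of the proximity test (hypothetical names; the activation region and the threshold would come from the signalled metadata): the angular distance between the current viewport centre and the centre of the switch activation region is compared against the threshold, and prefetching of the destination viewpoint is started once the distance falls below it:
    #include <math.h>
    #include <stdbool.h>

    #define D2R(d) ((d) * M_PI / 180.0)

    /* Great-circle angular distance (degrees) between two directions given
     * as azimuth/elevation pairs in degrees. */
    static double angular_distance(double az1, double el1, double az2, double el2)
    {
        double c = sin(D2R(el1)) * sin(D2R(el2)) +
                   cos(D2R(el1)) * cos(D2R(el2)) * cos(D2R(az1 - az2));
        if (c > 1.0) c = 1.0;
        if (c < -1.0) c = -1.0;
        return acos(c) * 180.0 / M_PI;
    }

    /* Sketch: start prefetching when the viewport centre is within the
     * signalled proximity threshold of the activation region centre. */
    static bool should_prefetch(double vp_az, double vp_el,
                                double region_az, double region_el,
                                double threshold_deg)
    {
        return angular_distance(vp_az, vp_el, region_az, region_el) <= threshold_deg;
    }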
• Another aspect relates to the operation of a player or a decoder upon receiving the above-described indication for controlling the viewpoint switch.
  • the operation may include, as shown in Figure 8, receiving (800) at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decoding (802) and rendering said first encoded viewpoint representation for playback; receiving (804), from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switching (806), in response to a playback action triggering a viewpoint switch, to decode and render said first encoded viewpoint representation for playback according to said indication.
• the embodiments relating to the encoding aspects may be implemented in an apparatus comprising: means for encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and means for encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
  • the embodiments relating to the encoding aspects may likewise be implemented in an apparatus comprising at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: encode omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encode metadata, in or along a bitstream comprising at the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
• the embodiments relating to the decoding aspects may be implemented in an apparatus comprising means for receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; means for decoding and means for rendering said first encoded viewpoint representation for playback; means for receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and means for switching, in response to a playback action triggering a viewpoint switch, to decode and render said first encoded viewpoint representation for playback according to said indication.
  • the embodiments relating to the decoding aspects may likewise be implemented in an apparatus comprising at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: receive at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decode and render said first encoded viewpoint representation for playback; receive, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switch, in response to a playback action triggering a viewpoint switch, to decode and render said first encoded viewpoint representation for playback according to said indication.
  • Such apparatuses may comprise e.g. the functional units disclosed in any of the Figures 1, 2, 3a and 3b for implementing the embodiments.
• the decoder should be interpreted to cover any operational unit capable of carrying out the decoding operations, such as a player, a receiver, a gateway, a demultiplexer and/or a decoder.
  • FIG. 9 is a graphical representation of an example multimedia communication system within which various embodiments may be implemented.
  • a data source 1510 provides a source signal in an analog, uncompressed digital, or compressed digital format, or any combination of these formats.
• An encoder 1520 may include or be connected with pre-processing, such as data format conversion and/or filtering of the source signal.
  • the encoder 1520 encodes the source signal into a coded media bitstream. It should be noted that a bitstream to be decoded may be received directly or indirectly from a remote device located within virtually any type of network. Additionally, the bitstream may be received from local hardware or software.
  • the encoder 1520 may be capable of encoding more than one media type, such as audio and video, or more than one encoder 1520 may be required to code different media types of the source signal.
  • the encoder 1520 may also get synthetically produced input, such as graphics and text, or it may be capable of producing coded bitstreams of synthetic media. In the following, only processing of one coded media bitstream of one media type is considered to simplify the description. It should be noted, however, that typically real-time broadcast services comprise several streams (typically at least one audio, video and text sub-titling stream). It should also be noted that the system may include many encoders, but in the figure only one encoder 1520 is represented to simplify the description without a lack of generality. It should be further understood that, although text and examples contained herein may specifically describe an encoding process, one skilled in the art would understand that the same concepts and principles also apply to the corresponding decoding process and vice versa.
  • the coded media bitstream may be transferred to a storage 1530.
  • the storage 1530 may comprise any type of mass memory to store the coded media bitstream.
• the format of the coded media bitstream in the storage 1530 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file, or the coded media bitstream may be encapsulated into a Segment format suitable for DASH (or a similar streaming system) and stored as a sequence of Segments. If one or more media bitstreams are encapsulated in a container file, a file generator (not shown in the figure) may be used to store the one or more media bitstreams in the file and create file format metadata, which may also be stored in the file.
  • the encoder 1520 or the storage 1530 may comprise the file generator, or the file generator is operationally attached to either the encoder 1520 or the storage 1530.
  • Some systems operate “live”, i.e. omit storage and transfer coded media bitstream from the encoder 1520 directly to the sender 1540.
  • the coded media bitstream may then be transferred to the sender 1540, also referred to as the server, on a need basis.
  • the format used in the transmission may be an elementary self-contained bitstream format, a packet stream format, a Segment format suitable for DASH (or a similar streaming system), or one or more coded media bitstreams may be encapsulated into a container file.
  • the encoder 1520, the storage 1530, and the server 1540 may reside in the same physical device or they may be included in separate devices.
  • the encoder 1520 and server 1540 may operate with live real-time content, in which case the coded media bitstream is typically not stored permanently, but rather buffered for small periods of time in the content encoder 1520 and/or in the server 1540 to smooth out variations in processing delay, transfer delay, and coded media bitrate.
  • the server 1540 sends the coded media bitstream using a communication protocol stack.
  • the stack may include but is not limited to one or more of Real-Time Transport Protocol (RTP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Transmission Control Protocol (TCP), and Internet Protocol (IP).
  • the server 1540 encapsulates the coded media bitstream into packets.
  • the sender 1540 may comprise or be operationally attached to a "sending file parser" (not shown in the figure).
  • a sending file parser locates appropriate parts of the coded media bitstream to be conveyed over the communication protocol.
  • the sending file parser may also help in creating the correct format for the communication protocol, such as packet headers and payloads.
  • the multimedia container file may contain encapsulation instructions, such as hint tracks in the ISOBMFF, for encapsulation of the at least one of the contained media bitstream on the communication protocol.
  • the server 1540 may or may not be connected to a gateway 1550 through a communication network, which may e.g. be a combination of a CDN, the Internet and/or one or more access networks.
• the gateway may also or alternatively be referred to as a middle-box.
• the gateway may be an edge server (of a CDN) or a web proxy. It is noted that the system may generally comprise any number of gateways or the like, but for the sake of simplicity, the following description only considers one gateway 1550.
• the gateway 1550 may perform different types of functions, such as translation of a packet stream according to one communication protocol stack to another communication protocol stack, merging and forking of data streams, and manipulation of data streams according to the downlink and/or receiver capabilities, such as controlling the bit rate of the forwarded stream according to prevailing downlink network conditions.
  • the gateway 1550 may be a server entity in various embodiments.
• the system includes one or more receivers 1560, typically capable of receiving, demodulating, and decapsulating the transmitted signal into a coded media bitstream.
  • the coded media bitstream may be transferred to a recording storage 1570.
  • the recording storage 1570 may comprise any type of mass memory to store the coded media bitstream.
• the recording storage 1570 may alternatively or additionally comprise computation memory, such as random access memory.
  • the format of the coded media bitstream in the recording storage 1570 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file.
  • a container file is typically used and the receiver 1560 comprises or is attached to a container file generator producing a container file from input streams.
• Some systems operate "live," i.e. omit the recording storage 1570 and transfer coded media bitstream from the receiver 1560 directly to the decoder 1580. In some systems, only the most recent part of the recorded stream, e.g., the most recent 10-minute excerpt of the recorded stream, is maintained in the recording storage 1570, while any earlier recorded data is discarded from the recording storage 1570.
  • the coded media bitstream may be transferred from the recording storage 1570 to the decoder 1580. If there are many coded media bitstreams, such as an audio stream and a video stream, associated with each other and encapsulated into a container file or a single media bitstream is encapsulated in a container file e.g. for easier access, a file parser (not shown in the figure) is used to decapsulate each coded media bitstream from the container file.
• the recording storage 1570 or a decoder 1580 may comprise the file parser, or the file parser is attached to either the recording storage 1570 or the decoder 1580. It should also be noted that the system may include many decoders, but here only one decoder 1580 is discussed to simplify the description without a lack of generality.
• the coded media bitstream may be processed further by a decoder 1580, whose output is one or more uncompressed media streams.
• a renderer 1590 may reproduce the uncompressed media streams with a loudspeaker or a display, for example.
• the receiver 1560, recording storage 1570, decoder 1580, and renderer 1590 may reside in the same physical device or they may be included in separate devices.
• a sender 1540 and/or a gateway 1550 may be configured to perform switching between different representations e.g. for switching between different viewports of 360-degree video content, view switching, bitrate adaptation and/or fast start-up, and/or a sender 1540 and/or a gateway 1550 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to respond to requests of the receiver 1560 or prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. In other words, the receiver 1560 may initiate switching between representations.
  • a request from the receiver can be, e.g., a request for a Segment or a Subsegment from a different representation than earlier, a request for a change of transmitted scalability layers and/or sub-layers, or a change of a rendering device having different capabilities compared to the previous one.
  • a request for a Segment may be an HTTP GET request.
  • a request for a Subsegment may be an HTTP GET request with a byte range.
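• For illustration, a Subsegment request differs from a Segment request only in carrying a Range header; a sketch of composing such a request (the path, host and byte offsets are placeholders):
    #include <stdio.h>

    /* Sketch: format an HTTP GET request for a byte range of a Segment. */
    static int format_range_request(char *buf, size_t buflen,
                                    const char *path, const char *host,
                                    unsigned long first_byte, unsigned long last_byte)
    {
        return snprintf(buf, buflen,
                        "GET %s HTTP/1.1\r\n"
                        "Host: %s\r\n"
                        "Range: bytes=%lu-%lu\r\n"
                        "\r\n",
                        path, host, first_byte, last_byte);
    }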
  • bitrate adjustment or bitrate adaptation may be used for example for providing so-called fast start-up in streaming services, where the bitrate of the transmitted stream is lower than the channel bitrate after starting or random-accessing the streaming in order to start playback immediately and to achieve a buffer occupancy level that tolerates occasional packet delays and/or retransmissions.
  • Bitrate adaptation may include multiple representation or layer up-switching and representation or layer down-switching operations taking place in various orders.
  • a decoder 1580 may be configured to perform switching between different representations e.g. for switching between different viewports of 360-degree video content, viewpoint switching, bitrate adaptation and/or fast start-up, and/or a decoder 1580 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to achieve faster decoding operation or to adapt the transmitted bitstream, e.g. in terms of bitrate, to prevailing conditions, such as throughput, of the network over which the bitstream is conveyed.
  • the decoder may comprise means for requesting at least one decoder reset picture of the second representation for carrying out bitrate adaptation between the first representation and a third representation.
  • Faster decoding operation might be needed for example if the device including the decoder 1580 is multi-tasking and uses computing resources for other purposes than decoding the video bitstream.
  • faster decoding operation might be needed when content is played back at a faster pace than the normal playback speed, e.g. twice or three times faster than conventional real-time playback rate.
  • user equipment may comprise a video codec such as those described in embodiments of the invention above. It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
  • elements of a public land mobile network may also comprise video codecs as described above.
  • the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
  • While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • the embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
  • any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
• the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD or CD and the data variants thereof.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multi-core processor architecture, as non-limiting examples.
  • Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process.
  • Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method comprising: encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints (700); and encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback (702).

Description

AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO
CODING AND DECODING
TECHNICAL FIELD
[0001] The present invention relates to an apparatus, a method and a computer program for video coding and decoding.
BACKGROUND
[0002] Recently, the development of various multimedia streaming applications, especially 360-degree video or virtual reality (VR) applications, has advanced with big steps. In viewport-adaptive streaming, the bitrate is aimed to be reduced e.g. such that the primary viewport (i.e., the current viewing orientation) is transmitted at the best quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution. When the viewing orientation changes, e.g. when the user turns his/her head when viewing the content with a head-mounted display (HMD), another version of the content needs to be streamed, matching the new viewing orientation. This typically involves a viewpoint switch from a first viewpoint to a second viewpoint.
[0003] Viewpoints may, for example, represent different viewing positions to the same scene, or provide completely different scenes, e.g. in a virtual tourist tour. There can be different expectations for user experience when switching from one viewpoint to another: either expecting a quick response by stopping the playback of the current viewpoint immediately and switching to the new viewpoint as soon as possible, or ensuring content continuity over the switch and hence keeping the current viewpoint playing as long as needed.
[0004] Consequently, there is a risk of inconsistent behavior of a playback unit, resulting in an unsatisfying user experience, if the playback unit, without further guidance, chooses to delay the switching instances to ensure continuous playback (e.g., if content is available for the current viewpoint but not yet for the destination viewpoint) or chooses to switch immediately to a new viewpoint with the associated risk of not having the content available to play back from the next playout sample.
SUMMARY
[0005] Now, an improved method and technical equipment implementing the method has been invented, by which the above problems are alleviated. Various aspects include methods, apparatuses and a computer readable medium comprising a computer program, or a signal stored therein, which are characterized by what is stated in the independent claims. Various details of the embodiments are disclosed in the dependent claims and in the corresponding images and description.
[0006] The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
[0007] A method according to a first aspect comprises encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
[0008] An apparatus according to a second aspect comprises means for encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and means for encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
[0009] An apparatus according to a third aspect comprises at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: encode omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encode metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
[0010] According to an embodiment, said indication comprises at least one parameter indicating at least one of the following: - in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point;
- in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be delayed until the second viewpoint representation has been decoded ready for rendering.
[0011] According to an embodiment, said indication is configured to be encoded as a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
[0012] According to an embodiment, a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure for ISO/IEC 23090-2.
[0013] According to an embodiment, a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointSwitchingListStruct syntax structure for ISO/IEC 23090-2.
[0014] According to an embodiment, the apparatus further comprises means for encoding, responsive to said indication indicating that the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point, a second parameter for indicating a timeout value for the switch from the first viewpoint representation to the second viewpoint representation to be completed.
[0015] A method according to a fourth aspect comprises receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decoding and rendering said first encoded viewpoint representation for playback; receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
[0016] An apparatus according to a fifth aspect comprises means for receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; means for decoding and means for rendering said first encoded viewpoint representation for playback; means for receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and means for switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
[0017] An apparatus according to a sixth aspect comprises at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: receive at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decode and render said first encoded viewpoint representation for playback; receive, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switch, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
[0018] The further aspects relate to apparatuses and computer readable storage media stored with code thereon, which are arranged to carry out the above methods and one or more of the embodiments related thereto.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:
[0020] Figure 1 shows schematically an electronic device employing embodiments of the invention;
[0021] Figure 2 shows schematically a user equipment suitable for employing embodiments of the invention;
[0022] Figures 3a and 3b show schematically an encoder and a decoder suitable for implementing embodiments of the invention;
[0023] Figure 4 shows an example of MPEG Omnidirectional Media Format (OMAF) concept;
[0024] Figures 5a and 5b show two alternative methods for packing 360-degree video content into 2D packed pictures for encoding;
[0025] Figure 6 shows the process of forming a monoscopic equirectangular panorama picture;
[0026] Figure 7 shows a flow chart of an encoding method according to an embodiment of the invention;
[0027] Figure 8 shows a flow chart of a decoding method according to an embodiment of the invention; and
[0028] Figure 9 shows a schematic diagram of an example multimedia communication system within which various embodiments may be implemented.
DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS
[0029] The following describes in further detail suitable apparatus and possible mechanisms for viewpoint switching. In this regard reference is first made to Figures 1 and 2, where Figure 1 shows a block diagram of a video coding system according to an example embodiment as a schematic block diagram of an exemplary apparatus or electronic device 50, which may incorporate a codec according to an embodiment of the invention. Figure 2 shows a layout of an apparatus according to an example embodiment. The elements of Figs. 1 and 2 will be explained next.
[0030] The electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system. However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require encoding and decoding or encoding or decoding video images.
[0031] The apparatus 50 may comprise a housing 30 for incorporating and protecting the device. The apparatus 50 further may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any suitable display technology suitable to display an image or video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the invention any suitable data or user interface mechanism may be employed. For example the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.
[0032] The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator). The apparatus may further comprise a camera capable of recording or capturing images and/or video. The apparatus 50 may further comprise an infrared port for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/firewire wired connection.
[0033] The apparatus 50 may comprise a controller 56, processor or processor circuitry for controlling the apparatus 50. The controller 56 may be connected to memory 58 which in embodiments of the invention may store both data in the form of image and audio data and/or may also store instructions for implementation on the controller 56. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller.
[0034] The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC and UICC reader for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
[0035] The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network. The apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
[0036] The apparatus 50 may comprise a camera capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing. The apparatus may receive the video image data for processing from another device prior to transmission and/or storage. The apparatus 50 may also receive either wirelessly or by a wired connection the image for coding/decoding. The structural elements of apparatus 50 described above represent examples of means for performing a corresponding function.
[0037] A video codec consists of an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. A video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec. Typically, the encoder discards some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
[0038] Figures 3a and 3b show an encoder and decoder for encoding and decoding the 2D pictures. A video codec consists of an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. Typically, the encoder discards and/or loses some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
[0039] An example of an encoding process is illustrated in Figure 3a. Figure 3a illustrates an image to be encoded (In); a predicted representation of an image block (P'n); a prediction error signal (Dn); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); a transform (T) and inverse transform (T-1); a quantization (Q) and inverse quantization (Q-1); entropy encoding (E); a reference frame memory (RFM); inter prediction (Pinter); intra prediction (Pintra); mode selection (MS) and filtering (F).
[0040] An example of a decoding process is illustrated in Figure 3b. Figure 3b illustrates a predicted representation of an image block (P'n); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); an inverse transform (T-1); an inverse quantization (Q-1); an entropy decoding (E-1); a reference frame memory (RFM); a prediction (either inter or intra) (P); and filtering (F).
[0041] Many hybrid video encoders, such as H.264/AVC encoders and High Efficiency Video Coding (H.265/HEVC a.k.a. HEVC) encoders, encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate). Video codecs may also provide a transform skip mode, which the encoders may choose to use. In the transform skip mode, the prediction error is coded in a sample domain, for example by deriving a sample-wise difference value relative to certain adjacent samples and coding the sample-wise difference value with an entropy coder.
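As a simplified numerical illustration of these two phases, the following Python sketch forms a residual between an original block and its prediction, quantizes it and reconstructs the block; real encoders operate on two-dimensional blocks and normally apply a DCT-like transform to the residual before quantization, which is omitted here.

# Toy illustration of hybrid coding: predict, code the residual, reconstruct.
def encode_block(original, prediction, qstep):
    residual = [o - p for o, p in zip(original, prediction)]
    # Quantization: a coarser step gives fewer bits but larger distortion
    # (the picture quality / bitrate trade-off mentioned above).
    return [round(r / qstep) for r in residual]

def decode_block(levels, prediction, qstep):
    return [p + l * qstep for p, l in zip(prediction, levels)]

original   = [104, 108, 113, 120]
prediction = [100, 100, 110, 118]   # e.g. from motion compensation or intra prediction
qstep      = 4
levels = encode_block(original, prediction, qstep)    # [1, 2, 1, 0]
recon  = decode_block(levels, prediction, qstep)      # [104, 108, 114, 118]
print(levels, recon)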
[0042] In temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures). In intra block copy (IBC; a.k.a. intra-block-copy prediction), prediction is applied similarly to temporal prediction but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process. Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively. In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction provided that they are performed with the same or a similar process as temporal prediction. Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.
[0043] Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, reduces temporal redundancy. In inter prediction the sources of prediction are previously decoded pictures. Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
[0044] In many video codecs, including H.264/AVC and HEVC, motion information is indicated by motion vectors associated with each motion compensated image block. Each of these motion vectors represents the displacement between the image block in the picture to be coded (in the encoder) or decoded (at the decoder) and the prediction source block in one of the previously coded or decoded pictures.
[0045] One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
[0046] Available media file format standards include International Standards Organization (ISO) base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF), Moving Picture Experts Group (MPEG)-4 file format (ISO/IEC 14496-14, also known as the MP4 format), file format for NAL (Network Abstraction Layer) unit structured video (ISO/IEC 14496-15) and High Efficiency Video Coding standard (HEVC or H.265/HEVC).
[0047] Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which the embodiments may be implemented. The aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
[0048] A basic building block in the ISO base media file format is called a box. Each box has a header and a payload. The box header indicates the type of the box and the size of the box in terms of bytes. A box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes.
[0049] According to the ISO family of file formats, a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four-character code (4CC) and starts with a header which informs about the type and size of the box.
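As an illustration of this structure, the following Python sketch iterates over the top-level boxes of a file by reading, for each box, the 32-bit size and the four-character type from the box header; the handling of the 64-bit 'largesize' and of a size of zero follows the common ISOBMFF conventions, and the example file name is hypothetical.

import struct

def iterate_top_level_boxes(path):
    # Yield (box_type, box_size) for each top-level box of an ISOBMFF file.
    with open(path, "rb") as f:
        while True:
            start = f.tell()
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            box_type = box_type.decode("ascii", "replace")
            if size == 1:
                # A 64-bit 'largesize' field follows the compact 8-byte header.
                size = struct.unpack(">Q", f.read(8))[0]
            elif size == 0:
                # The box extends to the end of the file.
                size = f.seek(0, 2) - start
            yield box_type, size
            # Jump to the next sibling box.
            f.seek(start + size)

# Example usage (hypothetical file name):
# for box_type, size in iterate_top_level_boxes("example.mp4"):
#     print(box_type, size)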
[0050] In files conforming to the ISO base media file format, the media data may be provided in one or more instances of MediaDataBox (‘mdat’) and the MovieBox (‘moov’) may be used to enclose the metadata for timed media. In some cases, for a file to be operable, both of the ‘mdat’ and ‘moov’ boxes may be required to be present. The ‘moov’ box may include one or more tracks, and each track may reside in one corresponding TrackBox (‘trak’). Each track is associated with a handler, identified by a four-character code, specifying the track type. Video, audio, and image sequence tracks can be collectively called media tracks, and they contain an elementary media stream. Other track types comprise hint tracks and timed metadata tracks.
[0051] Tracks comprise samples, such as audio or video frames. For video tracks, a media sample may correspond to a coded picture or an access unit.
[0052] A media track refers to samples (which may also be referred to as media samples) formatted according to a media compression format (and its encapsulation to the ISO base media file format). A hint track refers to hint samples, containing cookbook instructions for constructing packets for transmission over an indicated communication protocol. A timed metadata track may refer to samples describing referred media and/or hint samples.
[0053] A sample grouping in the ISO base media file format and its derivatives, such as the advanced video coding (AVC) file format and the scalable video coding (SVC) file format, may be defined as an assignment of each sample in a track to be a member of one sample group, based on a grouping criterion. A sample group in a sample grouping is not limited to being contiguous samples and may contain non-adjacent samples. As there may be more than one sample grouping for the samples in a track, each sample grouping may have a type field to indicate the type of grouping. Sample groupings may be represented by two linked data structures: (1) a SampleToGroupBox (sbgp box) represents the assignment of samples to sample groups; and (2) a SampleGroupDescriptionBox (sgpd box) contains a sample group entry for each sample group describing the properties of the group. There may be multiple instances of the SampleToGroupBox and SampleGroupDescriptionBox based on different grouping criteria. These may be distinguished by a type field used to indicate the type of grouping. SampleToGroupBox may comprise a grouping_type_parameter field that can be used e.g. to indicate a sub-type of the grouping.
[0054] In ISOBMFF, an edit list provides a mapping between the presentation timeline and the media timeline. Among other things, an edit list provides for the linear offset of the presentation of samples in a track, provides for the indication of empty times and provides for a particular sample to be dwelled on for a certain period of time. The presentation timeline may be accordingly modified to provide for looping, such as for the looping videos of the various regions of the scene. One example of the box that includes the edit list, the EditListBox, is provided below:

aligned(8) class EditListBox extends FullBox('elst', version, flags) {
    unsigned int(32) entry_count;
    for (i=1; i <= entry_count; i++) {
        if (version==1) {
            unsigned int(64) segment_duration;
            int(64) media_time;
        } else { // version==0
            unsigned int(32) segment_duration;
            int(32) media_time;
        }
        int(16) media_rate_integer;
        int(16) media_rate_fraction = 0;
    }
}
[0055] In ISOBMFF, an EditListBox may be contained in EditBox, which is contained in TrackBox ('trak'). In this example of the edit list box, flags specifies the repetition of the edit list. By way of example, setting a specific bit within the box flags (the least significant bit, i.e., flags & 1 in ANSI-C notation, where & indicates a bit-wise AND operation) equal to 0 specifies that the edit list is not repeated, while setting the specific bit (i.e., flags & 1 in ANSI-C notation) equal to 1 specifies that the edit list is repeated. The values of box flags greater than 1 may be defined to be reserved for future extensions. As such, when the edit list box indicates the playback of zero or one samples, (flags & 1) shall be equal to zero. When the edit list is repeated, the media at time 0 resulting from the edit list follows immediately the media having the largest time resulting from the edit list such that the edit list is repeated seamlessly.
[0056] In ISOBMFF, a Track group enables grouping of tracks based on certain characteristics, or indicates that the tracks within a group have a particular relationship. Track grouping, however, does not allow any image items in the group.
[0057] The syntax of TrackGroupBox in ISOBMFF is as follows:

aligned(8) class TrackGroupBox extends Box('trgr') {
}

aligned(8) class TrackGroupTypeBox(unsigned int(32) track_group_type)
    extends FullBox(track_group_type, version = 0, flags = 0)
{
    unsigned int(32) track_group_id;
    // the remaining data may be specified for a particular track group type
}
[0058] wherein track_group_type indicates the grouping_type and shall be set to one of the following values, or a value registered, or a value from a derived specification or registration:
'msrc' indicates that this track belongs to a multi-source presentation. The tracks that have the same value of track_group_id within a TrackGroupTypeBox of track_group_type 'msrc' are mapped as being originated from the same source. For example, a recording of a video telephony call may have both audio and video for both participants, and the value of track_group_id associated with the audio track and the video track of one participant differs from the value of track_group_id associated with the tracks of the other participant.
[0059] The pair of track_group_id and track_group_type identifies a track group within the file. The tracks that contain a particular TrackGroupTypeBox having the same value of track_group_id and track_group_type belong to the same track group.
[0060] The Entity grouping is similar to track grouping but enables grouping of both tracks and image items in the same group.
[0061] The syntax of EntityToGroupBox in ISOBMFF is as follows:

aligned(8) class EntityToGroupBox(grouping_type, version, flags)
    extends FullBox(grouping_type, version, flags) {
    unsigned int(32) group_id;
    unsigned int(32) num_entities_in_group;
    for(i=0; i<num_entities_in_group; i++)
        unsigned int(32) entity_id;
}
[0062] wherein group_id is a non-negative integer assigned to the particular grouping that shall not be equal to any group_id value of any other EntityToGroupBox, any item_ID value of the hierarchy level (file, movie or track) that contains the GroupsListBox, or any track_ID value (when the GroupsListBox is contained in the file level).
[0063] num_entities_in_group specifies the number of entity_id values mapped to this entity group.
[0064] entity_id is resolved to an item, when an item with item_ID equal to entity_id is present in the hierarchy level (file, movie or track) that contains the GroupsListBox, or to a track, when a track with track_ID equal to entity_id is present and the GroupsListBox is contained in the file level.
[0065] Files conforming to the ISOBMFF may contain any non-timed objects, referred to as items, meta items, or metadata items, in a meta box (a.k.a. Metabox, four-character code: ‘meta’). While the name of the meta box refers to metadata, items can generally contain metadata or media data. The meta box may reside at the top level of the file, within a movie box (four-character code: ‘moov’), and within a track box (four-character code: ‘trak’), but at most one meta box may occur at each of the file level, movie level, or track level. The meta box may be required to contain a ‘hdlr’ box indicating the structure or format of the ‘meta’ box contents. The meta box may list and characterize any number of items that can be referred to, and each one of them can be associated with a file name and is uniquely identified within the file by an item identifier (item_id), which is an integer value. The metadata items may be for example stored in the ‘idat’ box of the meta box or in an ‘mdat’ box or reside in a separate file. If the metadata is located external to the file then its location may be declared by the DataInformationBox (four-character code: ‘dinf’). In the specific case that the metadata is formatted using Extensible Markup Language (XML) syntax and is required to be stored directly in the MetaBox, the metadata may be encapsulated into either the XMLBox (four-character code: ‘xml ’) or the BinaryXMLBox (four-character code: ‘bxml’). An item may be stored as a contiguous byte range, or it may be stored in several extents, each being a contiguous byte range. In other words, items may be stored fragmented into extents, e.g. to enable interleaving. An extent is a contiguous subset of the bytes of the resource. The resource can be formed by concatenating the extents.
[0066] The ItemPropertiesBox enables the association of any item with an ordered set of item properties. Item properties may be regarded as small data records. The ItemPropertiesBox consists of two parts: ItemPropertyContainerBox that contains an implicitly indexed list of item properties, and one or more ItemPropertyAssociationBox(es) that associate items with item properties.
[0067] Hypertext Transfer Protocol (HTTP) has been widely used for the delivery of real-time multimedia content over the Internet, such as in video streaming applications. Several commercial solutions for adaptive streaming over HTTP, such as Microsoft® Smooth Streaming, Apple® Adaptive HTTP Live Streaming and Adobe® Dynamic Streaming, have been launched, and standardization projects have been carried out. Adaptive HTTP streaming (AHS) was first standardized in Release 9 of 3rd Generation Partnership Project (3GPP) packet-switched streaming (PSS) service (3GPP TS 26.234 Release 9: “Transparent end-to-end packet-switched streaming service (PSS); protocols and codecs”). MPEG took 3GPP AHS Release 9 as a starting point for the MPEG DASH standard (ISO/IEC 23009-1: “Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats”). MPEG DASH and 3GP-DASH are technically close to each other and may therefore be collectively referred to as DASH. Some concepts, formats and operations of DASH are described below as an example of a video streaming system, wherein the embodiments may be implemented. The aspects of the invention are not limited to the above standard documents but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
[0068] In DASH, the multimedia content may be stored on an HTTP server and may be delivered using HTTP. The content may be stored on the server in two parts: Media Presentation Description (MPD), which describes a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and segments, which contain the actual multimedia bitstreams in the form of chunks, in a single or multiple files. The MPD provides the necessary information for clients to establish a dynamic adaptive streaming over HTTP. The MPD contains information describing media presentation, such as an HTTP uniform resource locator (URL) of each Segment to make a GET Segment request.
[0069] To play the content, the DASH client may obtain the MPD e.g. by using HTTP, email, thumb drive, broadcast, or other transport methods. By parsing the MPD, the DASH client may become aware of the program timing, media-content availability, media types, resolutions, minimum and maximum bandwidths, and the existence of various encoded alternatives of multimedia components, accessibility features and required digital rights management (DRM), media-component locations on the network, and other content characteristics. Using this information, the DASH client may select the appropriate encoded alternative and start streaming the content by fetching the segments using e.g. HTTP GET requests. After appropriate buffering to allow for network throughput variations, the client may continue fetching the subsequent segments and also monitor the network bandwidth fluctuations. The client may decide how to adapt to the available bandwidth by fetching segments of different alternatives (with lower or higher bitrates) to maintain an adequate buffer.
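As a rough, non-normative sketch of this client behaviour, the Python code below fetches an MPD, selects the first video Adaptation Set and its first Representation, and issues HTTP GET requests for the Initialization Segment and the Media Segments; the URL, the mimeType value and the use of an explicit SegmentList are illustrative assumptions, since real MPDs commonly use segment templates and need more complete parsing.

import urllib.request
import xml.etree.ElementTree as ET

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def stream_first_video_representation(mpd_url):
    # 1. Obtain and parse the MPD (the manifest of the available content).
    mpd = ET.fromstring(fetch(mpd_url))
    period = mpd.find("mpd:Period", NS)
    adaptation_set = period.find("mpd:AdaptationSet[@mimeType='video/mp4']", NS)
    representation = adaptation_set.find("mpd:Representation", NS)
    base_url = mpd_url.rsplit("/", 1)[0] + "/"

    # 2. Fetch the Initialization Segment, then the Media Segments in order.
    #    A SegmentList with explicit segment URLs is assumed for simplicity.
    seg_list = representation.find("mpd:SegmentList", NS)
    init_url = seg_list.find("mpd:Initialization", NS).get("sourceURL")
    yield fetch(base_url + init_url)
    for seg in seg_list.findall("mpd:SegmentURL", NS):
        yield fetch(base_url + seg.get("media"))

# Example usage (hypothetical MPD URL; feed_to_decoder is a placeholder):
# for chunk in stream_first_video_representation("https://example.com/content.mpd"):
#     feed_to_decoder(chunk)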
[0070] In the context of DASH, the following definitions may be used: A media content component or a media component may be defined as one continuous component of the media content with an assigned media component type that can be encoded individually into a media stream. Media content may be defined as one media content period or a contiguous sequence of media content periods. Media content component type may be defined as a single type of media content such as audio, video, or text. A media stream may be defined as an encoded version of a media content component.
[0071] In DASH, a hierarchical data model is used to structure media presentation as follows. A media presentation consists of a sequence of one or more Periods, each Period contains one or more Groups, each Group contains one or more Adaptation Sets, each Adaptation Set contains one or more Representations, and each Representation consists of one or more Segments. A Group may be defined as a collection of Adaptation Sets that are not expected to be presented simultaneously. An Adaptation Set may be defined as a set of interchangeable encoded versions of one or several media content components. A Representation is one of the alternative choices of the media content or a subset thereof typically differing by the encoding choice, e.g. by bitrate, resolution, language, codec, etc. The Segment contains a certain duration of media data, and metadata to decode and present the included media content. A Segment is identified by a URI and can typically be requested by an HTTP GET request. A Segment may be defined as a unit of data associated with an HTTP-URL and optionally a byte range that are specified by an MPD.
[0072] The DASH MPD complies with Extensible Markup Language (XML) and is therefore specified through elements and attributes as defined in XML. The MPD may be specified using the following conventions: Elements in an XML document may be identified by an upper-case first letter and may appear in bold face as Element. To express that an element Elementl is contained in another element Element2, one may write Element2.Elementl. If an element’s name consists of two or more combined words, camel casing may be used, such as ImportantElement, for example. Elements may be present either exactly once, or the minimum and maximum occurrence may be defined by <minOccurs> ... <maxOccurs>. Attributes in an XML document may be identified by a lower-case first letter as well as they may be preceded by a ‘@’-sign, e.g. @attribute, for example. To point to a specific attribute @attribute contained in an element Element, one may write Element@attribute. If an attribute’s name consists of two or more combined words, camel-casing may be used after the first word, such as @veryImportantAttribute, for example. Attributes may have assigned a status in the XML as mandatory (M), optional (O), optional with default value (OD) and conditionally mandatory (CM).
[0073] In DASH, all descriptor elements are typically structured in the same way, in that they contain a @schemeIdUri attribute that provides a URI to identify the scheme and an optional attribute @value and an optional attribute @id. The semantics of the element are specific to the scheme employed. The URI identifying the scheme may be a URN or a URL. Some descriptors are specified in MPEG-DASH (ISO/IEC 23009-1), while descriptors can additionally or alternatively be specified in other specifications. When specified in specifications other than MPEG-DASH, the MPD does not provide any specific information on how to use descriptor elements. It is up to the application or specification that employs DASH formats to instantiate the description elements with appropriate scheme information. Applications or specifications that use one of these elements define a Scheme Identifier in the form of a URI and the value space for the element when that Scheme Identifier is used. The Scheme Identifier appears in the @schemeIdUri attribute. In the case that a simple set of enumerated values are required, a text string may be defined for each value and this string may be included in the @value attribute. If structured data is required then any extension element or attribute may be defined in a separate namespace. The @id value may be used to refer to a unique descriptor or to a group of descriptors. In the latter case, descriptors with identical values for the attribute @id may be required to be synonymous, i.e. the processing of one of the descriptors with an identical value for @id is sufficient. Two elements of type DescriptorType are equivalent, if the element name, the value of the @schemeIdUri and the value of the @value attribute are equivalent. If the @schemeIdUri is a URN, then equivalence may refer to lexical equivalence as defined in clause 5 of RFC 2141. If the @schemeIdUri is a URL, then equivalence may refer to equality on a character-for-character basis as defined in clause 6.2.1 of RFC 3986. If the @value attribute is not present, equivalence may be determined by the equivalence for @schemeIdUri only. Attributes and elements in extension namespaces might not be used for determining equivalence. The @id attribute may be ignored for equivalence determination.
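The equivalence rule above can be summarised by a small helper function; in the sketch below a descriptor is assumed to be given as an (element name, @schemeIdUri, @value or None) tuple, the URN comparison is only an approximation of RFC 2141 lexical equivalence (the "urn:" prefix and the namespace identifier compared case-insensitively), and the scheme URI in the example is hypothetical.

def urn_lexically_equivalent(a, b):
    # Approximation of RFC 2141 lexical equivalence.
    pa, pb = a.split(":", 2), b.split(":", 2)
    if len(pa) != 3 or len(pb) != 3:
        return a == b
    return (pa[0].lower() == pb[0].lower()      # "urn" prefix, case-insensitive
            and pa[1].lower() == pb[1].lower()  # namespace identifier, case-insensitive
            and pa[2] == pb[2])                 # namespace-specific string

def descriptors_equivalent(d1, d2):
    # d1, d2: (element_name, scheme_id_uri, value_or_None); @id is ignored here.
    name1, scheme1, value1 = d1
    name2, scheme2, value2 = d2
    if name1 != name2:
        return False
    if scheme1.lower().startswith("urn:"):
        schemes_equal = urn_lexically_equivalent(scheme1, scheme2)
    else:
        schemes_equal = scheme1 == scheme2      # URL: character-for-character
    if not schemes_equal:
        return False
    if value1 is None or value2 is None:
        # @value absent: equivalence determined by @schemeIdUri only.
        return value1 is None and value2 is None
    return value1 == value2

print(descriptors_equivalent(
    ("SupplementalProperty", "urn:example:scheme:2020", "a,b"),
    ("SupplementalProperty", "URN:example:scheme:2020", "a,b")))   # True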
[0074] MPEG-DASH specifies descriptors EssentialProperty and SupplementalProperty. For the element EssentialProperty the Media Presentation author expresses that the successful processing of the descriptor is essential to properly use the information in the parent element that contains this descriptor unless the element shares the same @id with another EssentialProperty element. If EssentialProperty elements share the same @id, then processing one of the EssentialProperty elements with the same value for @id is sufficient. At least one EssentialProperty element of each distinct @id value is expected to be processed. If the scheme or the value for an EssentialProperty descriptor is not recognized the DASH client is expected to ignore the parent element that contains the descriptor. Multiple EssentialProperty elements with the same value for @id and with different values for @id may be present in an MPD.
[0075] For the element SupplementalProperty the Media Presentation author expresses that the descriptor contains supplemental information that may be used by the DASH client for optimized processing. If the scheme or the value for a SupplementalProperty descriptor is not recognized the DASH client is expected to ignore the descriptor. Multiple SupplementalProperty elements may be present in an MPD.
[0076] MPEG-DASH specifies a Viewpoint element that is formatted as a property descriptor. The @schemeIdUri attribute of the Viewpoint element is used to identify the viewpoint scheme employed. Adaptation Sets containing non-equivalent Viewpoint element values contain different media content components. The Viewpoint elements may equally be applied to media content types that are not video. Adaptation Sets with equivalent Viewpoint element values are intended to be presented together. This handling should be applied equally for recognized and unrecognized @schemeIdUri values.
[0077] An Initialization Segment may be defined as a Segment containing metadata that is necessary to present the media streams encapsulated in Media Segments. In ISOBMFF based segment formats, an Initialization Segment may comprise the Movie Box ('moov') which might not include metadata for any samples, i.e. any metadata for samples is provided in 'moof' boxes.
[0078] A Media Segment contains a certain duration of media data for playback at a normal speed; such duration is referred to as Media Segment duration or Segment duration. The content producer or service provider may select the Segment duration according to the desired characteristics of the service. For example, a relatively short Segment duration may be used in a live service to achieve a short end-to-end latency. The reason is that Segment duration is typically a lower bound on the end-to-end latency perceived by a DASH client since a Segment is a discrete unit of generating media data for DASH. Content generation is typically done in such a manner that a whole Segment of media data is made available for a server. Furthermore, many client implementations use a Segment as the unit for GET requests. Thus, in typical arrangements for live services a Segment can be requested by a DASH client only when the whole duration of Media Segment is available as well as encoded and encapsulated into a Segment. For on-demand service, different strategies of selecting Segment duration may be used.
[0079] A Segment may be further partitioned into Subsegments to enable downloading segments in multiple parts, for example. Subsegments may be required to contain complete access units. Subsegments may be indexed by a Segment Index box, which contains information to map presentation time range and byte range for each Subsegment. The Segment Index box may also describe subsegments and stream access points in the segment by signaling their durations and byte offsets. A DASH client may use the information obtained from Segment Index box(es) to make an HTTP GET request for a specific Subsegment using a byte range HTTP request. If a relatively long Segment duration is used, then Subsegments may be used to keep the size of HTTP responses reasonable and flexible for bitrate adaptation. The indexing information of a segment may be put in the single box at the beginning of that segment or spread among many indexing boxes in the segment. Different methods of spreading are possible, such as hierarchical, daisy chain, and hybrid, for example. This technique may avoid adding a large box at the beginning of the segment and therefore may prevent a possible initial download delay.
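A minimal sketch of such byte-range access is given below; it assumes the byte offset and size of the desired Subsegment have already been derived from the Segment Index box, and the segment URL is hypothetical.

import urllib.request

def fetch_subsegment(segment_url, first_byte, size):
    # Fetch one Subsegment with an HTTP byte-range GET request.
    last_byte = first_byte + size - 1
    req = urllib.request.Request(
        segment_url,
        headers={"Range": "bytes=%d-%d" % (first_byte, last_byte)})
    with urllib.request.urlopen(req) as resp:
        # A server honouring the range request answers with 206 Partial Content.
        return resp.read()

# Example (hypothetical URL; offset and size taken from a Segment Index box):
# data = fetch_subsegment("https://example.com/seg1.m4s", 1024, 65536)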
[0080] Sub-Representations are embedded in regular Representations and are described by the SubRepresentation element. SubRepresentation elements are contained in a Representation element. The SubRepresentation element describes properties of one or several media content components that are embedded in the Representation. It may for example describe the exact properties of an embedded audio component (such as codec, sampling rate, etc., for example), an embedded sub-title (such as codec, for example) or it may describe some embedded lower quality video layer (such as some lower frame rate, or otherwise, for example). Sub-Representations and Representation share some common attributes and elements. In case the @level attribute is present in the SubRepresentation element, the following applies:
- Sub-Representations provide the ability for accessing a lower quality version of the Representation in which they are contained. In this case, Sub-Representations for example allow extracting the audio track in a multiplexed Representation or may allow for efficient fast-forward or rewind operations if provided with lower frame rate;
- The Initialization Segment and/or the Media Segments and/or the Index Segments shall provide sufficient information such that the data can be easily accessed through HTTP partial GET requests. The details on providing such information are defined by the media format in use.
[0081] When ISOBMFF Segments are used, the following applies:
- The Initialization Segment contains the Level Assignment box.
- The Subsegment Index box (‘ssix’) is present for each Subsegment.
- The attribute @level specifies the level to which the described Sub-Representation is associated in the Subsegment Index. The information in Representation, Sub-Representation and in the Level Assignment (‘leva’) box contains information on the assignment of media data to levels.
- Media data should have an order such that each level provides an enhancement compared to the lower levels.
[0082] If the @level attribute is absent, then the SubRepresentation element is solely used to provide a more detailed description for media streams that are embedded in the Representation.
[0083] The ISOBMFF includes the so-called level mechanism to specify subsets of the file. Levels follow the dependency hierarchy so that samples mapped to level n may depend on any samples of levels m, where m <= n, and do not depend on any samples of levels p, where p > n. For example, levels can be specified according to temporal sub-layer (e.g., TemporalId of HEVC). Levels may be announced in the Level Assignment ('leva') box contained in the Movie Extends ('mvex') box. Levels cannot be specified for the initial movie. When the Level Assignment box is present, it applies to all movie fragments subsequent to the initial movie. For the context of the Level Assignment box, a fraction is defined to consist of one or more Movie Fragment boxes and the associated Media Data boxes, possibly including only an initial part of the last Media Data Box. Within a fraction, data for each level appears contiguously. Data for levels within a fraction appears in increasing order of level value. All data in a fraction is assigned to levels. The Level Assignment box provides a mapping from features, such as scalability layers or temporal sub-layers, to levels. A feature can be specified through a track, a sub-track within a track, or a sample grouping of a track. For example, the Temporal Level sample grouping may be used to indicate a mapping of the pictures to temporal levels, which are equivalent to temporal sub-layers in HEVC. That is, HEVC pictures of a certain TemporalId value may be mapped to a particular temporal level using the Temporal Level sample grouping (and the same can be repeated for all TemporalId values). The Level Assignment box can then refer to the Temporal Level sample grouping in the indicated mapping to levels.
[0084] The Subsegment Index box (’ssix’) provides a mapping from levels (as specified by the Level Assignment box) to byte ranges of the indexed subsegment. In other words, this box provides a compact index for how the data in a subsegment is ordered according to levels into partial subsegments. It enables a client to easily access data for partial subsegments by downloading ranges of data in the subsegment. When the Subsegment Index box is present, each byte in the subsegment is assigned to a level. If the range is not associated with any information in the level assignment, then any level that is not included in the level assignment may be used. There is 0 or 1 Subsegment Index boxes present per each Segment Index box that indexes only leaf subsegments, i.e. that only indexes subsegments but no segment indexes. A Subsegment Index box, if any, is the next box after the associated Segment Index box. A Subsegment Index box documents the subsegment that is indicated in the immediately preceding Segment Index box. Each level may be assigned to exactly one partial subsegment, i.e. byte ranges for one level are contiguous. Levels of partial subsegments are assigned by increasing numbers within a subsegment, i.e., samples of a partial subsegment may depend on any samples of preceding partial subsegments in the same subsegment, but not the other way around. For example, each partial subsegment contains samples having an identical temporal sub-layer and partial subsegments appear in increasing temporal sub-layer order within the subsegment. When a partial subsegment is accessed in this way, the final Media Data box may be incomplete, that is, less data is accessed than the length indication of the Media Data Box indicates is present. The length of the Media Data box may need adjusting, or padding may be used. The padding flag in the Level Assignment Box indicates whether this missing data can be replaced by zeros. If not, the sample data for samples assigned to levels that are not accessed is not present, and care should be taken.
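As an illustration of how a client could use this mapping, the sketch below computes the single contiguous byte range that covers all levels up to a requested level, given simplified per-level range sizes for one subsegment (a stand-in for the fields of the Subsegment Index box, assuming the ranges are listed in increasing level order).

def byte_range_for_levels(subsegment_first_byte, level_range_sizes, max_level):
    # Return (first_byte, last_byte) covering levels 0..max_level of a subsegment.
    # Because the data for the levels is laid out contiguously and in increasing
    # order of level, the union of levels 0..max_level is one contiguous range
    # starting at the beginning of the subsegment.
    total = sum(level_range_sizes[:max_level + 1])
    return subsegment_first_byte, subsegment_first_byte + total - 1

# Example: three levels of 40 000, 25 000 and 15 000 bytes; access levels 0..1 only.
print(byte_range_for_levels(100000, [40000, 25000, 15000], 1))  # (100000, 164999)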
[0085] Virtual reality is a rapidly developing area of technology in which image or video content, sometimes accompanied by audio, is provided to a user device such as a user headset (a.k.a. head-mounted display, HMD). As is known, the user device may be provided with a live or stored feed from a content source, the feed representing a virtual space for immersive output through the user device. Compared to encoding and decoding conventional 2D video content, immersive multimedia, such as omnidirectional content consumption, is more complex to encode and decode for the end user. This is due to the higher degree of freedom available to the end user. Currently, many virtual reality user devices use so-called three degrees of freedom (3DoF), which means that the head movement in the yaw, pitch and roll axes is measured and determines what the user sees, i.e. determines the viewport. This freedom also results in more uncertainty. The situation is further complicated when layers of content are rendered, e.g., in case of overlays.
[0086] As used herein the term omnidirectional may refer to media content that has greater spatial extent than a field-of-view of a device rendering the content. Omnidirectional content may for example cover substantially 360 degrees in the horizontal dimension and substantially 180 degrees in the vertical dimension, but omnidirectional may also refer to content covering less than 360 degree view in the horizontal direction and/or 180 degree view in the vertical direction.
[0087] Terms 360-degree video or virtual reality (VR) video may sometimes be used interchangeably. They may generally refer to video content that provides such a large field of view that only a part of the video is displayed at a single point of time in typical displaying arrangements. For example, VR video may be viewed on a head-mounted display (HMD) that may be capable of displaying e.g. about 100-degree field of view. The spatial subset of the VR video content to be displayed may be selected based on the orientation of the HMD. In another example, a typical flat-panel viewing environment is assumed, wherein e.g. up to 40-degree field-of-view may be displayed. When displaying wide-FOV content (e.g. fisheye) on such a display, it may be preferred to display a spatial subset rather than the entire picture.
[0088] MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard. OMAF defines a media format (comprising both file format derived from ISOBMFF and streaming formats for DASH and MPEG Media Transport). OMAF version 1 supports 360° video, images, and audio, as well as the associated timed text and facilitates three degrees of freedom (3DoF) content consumption, meaning that a viewport can be selected with any azimuth and elevation range and tilt angle that are covered by the omnidirectional content but the content is not adapted to any translational changes of the viewing position.
[0089] Standardization of OMAF version 2 (MPEG-I Phase 1b) is ongoing. OMAF v2 is planned to include features like support for multiple viewpoints, overlays, sub-picture compositions, and six degrees of freedom with a viewing space limited roughly to upper-body movements only.
[0090] A viewpoint may be defined as the point or space from which the user views the scene; it usually corresponds to a camera position. Slight head motion does not imply a different viewpoint. As used herein the term “observation point or Viewpoint” refers to a volume in a three-dimensional space for virtual reality audio/video acquisition or playback. A Viewpoint is the trajectory, such as a circle, a region, or a volume, around the centre point of a device or rig used for omnidirectional audio/video acquisition and the position of the observer's head in the three-dimensional space in which the audio and video tracks are located. In some cases, an observer's head position is tracked and the rendering is adjusted for head movements in addition to head rotations, and then a Viewpoint may be understood to be an initial or reference position of the observer's head. In implementations utilizing DASH (Dynamic adaptive streaming over HTTP), each observation point may be defined as a viewpoint by a viewpoint property descriptor. The definition may be stored in ISOBMFF or OMAF type of file format. The delivery could be HLS (HTTP Live Streaming), RTSP/RTP (Real Time Streaming Protocol/Real-time Transport Protocol) streaming in addition to DASH.
[0091] As used herein, the term “spatially related Viewpoint group” refers to Viewpoints which have content that has a spatial relationship between them. For example, content captured by VR cameras at different locations in the same basketball court or a music concert captured from different locations on the stage.
[0092] As used herein, the term “logically related Viewpoint group” refers to related Viewpoints which do not have a clear spatial relationship but are logically related. The relative position of logically related Viewpoints is described based on the creative intent. For example, two Viewpoints that are members of a logically related Viewpoint group may correspond to content from the performance area and the dressing room. Another example could be two Viewpoints from the dressing rooms of the two competing teams that form a logically related Viewpoint group to permit users to traverse between both teams to see the player reactions.
[0093] As used herein, Viewpoints qualifying according to above definitions of the spatially related Viewpoint group and logically related Viewpoint group may be commonly referred to as mutually related Viewpoints, sometimes also as a mutually related Viewpoint group.
[0094] As used herein, the term “random access” may refer to the ability of a decoder to start decoding a stream at a point other than the beginning of the stream and recover an exact or approximate reconstructed media signal, such as a representation of the decoded pictures.
A random access point and a recovery point may be used to characterize a random access operation. A random access point may be defined as a location in a media stream, such as an access unit or a coded picture within a video bitstream, where decoding can be initiated. A recovery point may be defined as a first location in a media stream or within the reconstructed signal characterized in that all media, such as decoded pictures, at or subsequent to a recovery point in output order are correct or approximately correct in content, when the decoding has started from the respective random access point. If the random access point is the same as the recovery point, the random access operation is instantaneous; otherwise, it may be gradual.
[0095] Random access points enable, for example, seek, fast forward play, and fast backward play operations in locally stored media streams as well as in media streaming. In contexts involving on-demand streaming, servers can respond to seek requests by transmitting data starting from the random access point that is closest to (and in many cases preceding) the requested destination of the seek operation and/or decoders can start decoding from the random access point that is closest to (and in many cases preceding) the requested destination of the seek operation. Switching between coded streams of different bit-rates is a method that is used commonly in unicast streaming to match the transmitted bitrate to the expected network throughput and to avoid congestion in the network. Switching to another stream is possible at a random access point. Furthermore, random access points enable tuning in to a broadcast or multicast. In addition, a random access point can be coded as a response to a scene cut in the source sequence or as a response to an intra picture update request.
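For example, the random access point from which decoding should start for a given seek request can be selected as sketched below, assuming the presentation times of the random access points are known, e.g. from a file or segment index.

import bisect

def random_access_point_for_seek(rap_times, seek_time):
    # Return the random access point closest to (and preceding, if possible) seek_time.
    # rap_times: sorted presentation times of the random access points.
    i = bisect.bisect_right(rap_times, seek_time)
    return rap_times[max(i - 1, 0)]

# Example: random access points every 2 seconds, seek to t = 7.4 s.
print(random_access_point_for_seek([0.0, 2.0, 4.0, 6.0, 8.0], 7.4))  # 6.0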
[0096] MPEG Omnidirectional Media Format (OMAF) is described in the following by referring to Figure 4. A real-world audio-visual scene (A) is captured by audio sensors as well as a set of cameras or a camera device with multiple lenses and sensors. The acquisition results in a set of digital image/video (Bi) and audio (Ba) signals. The cameras/lenses typically cover all directions around the center point of the camera set or camera device, thus the name of 360-degree video.
[0097] Audio can be captured using many different microphone configurations and stored as several different content formats, including channel-based signals, static or dynamic (i.e. moving through the 3D scene) object signals, and scene-based signals (e.g., Higher Order Ambisonics). The channel-based signals typically conform to one of the loudspeaker layouts defined in CICP. In an omnidirectional media application, the loudspeaker layout signals of the rendered immersive audio program are binauralized for presentation via headphones.
[0098] The images (Bi) of the same time instance are stitched, projected, and mapped onto a packed picture (D).
[0099] For monoscopic 360-degree video, the input images of one time instance are stitched to generate a projected picture representing one view. The breakdown of image stitching, projection, and region-wise packing process for monoscopic content is illustrated with Figure 5a and described as follows. Input images (Bi) are stitched and projected onto a three-dimensional projection structure that may for example be a unit sphere. The projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof. A projection structure may be defined as a three-dimensional structure consisting of one or more surface(s) on which the captured VR image/video content is projected, and from which a respective projected picture can be formed. The image data on the projection structure is further arranged onto a two-dimensional projected picture (C). The term projection may be defined as a process by which a set of input images are projected onto a projected frame. There may be a pre-defined set of representation formats of the projected picture, including for example an equirectangular projection (ERP) format and a cube map projection (CMP) format. It may be considered that the projected picture covers the entire sphere.
[0100] Optionally, region-wise packing is then applied to map the projected picture onto a packed picture. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding. Otherwise, regions of the projected picture are mapped onto a packed picture (D) by indicating the location, shape, and size of each region in the packed picture, and the packed picture (D) is given as input to image/video encoding. The term region-wise packing may be defined as a process by which a projected picture is mapped to a packed picture. The term packed picture may be defined as a picture that results from region-wise packing of a projected picture.
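The rectangular case of such a mapping can be illustrated with the following sketch, which copies one region of a projected picture into its indicated location in the packed picture; actual region-wise packing additionally allows per-region resampling, rotation and mirroring, which are omitted here, and the picture sizes are arbitrary example values.

def pack_rectangular_region(projected, packed,
                            proj_x, proj_y, width, height,
                            packed_x, packed_y):
    # Copy a width x height region of the projected picture (a list of rows)
    # to position (packed_x, packed_y) in the packed picture.
    for dy in range(height):
        for dx in range(width):
            packed[packed_y + dy][packed_x + dx] = projected[proj_y + dy][proj_x + dx]

# Example: move a 2x2 region from (0, 0) of a 4x2 projected picture to (2, 0)
# in the packed picture.
projected = [[1, 2, 3, 4],
             [5, 6, 7, 8]]
packed = [[0] * 4 for _ in range(2)]
pack_rectangular_region(projected, packed, 0, 0, 2, 2, 2, 0)
print(packed)  # [[0, 0, 1, 2], [0, 0, 5, 6]]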
[0101] In the case of stereoscopic 360-degree video, the input images of one time instance are stitched to generate a projected picture representing two views, one for each eye. Both views can be mapped onto the same packed picture, as described below in relation to the Figure 5b, and encoded by a traditional 2D video encoder. Alternatively, each view of the projected picture can be mapped to its own packed picture, in which case the image stitching, projection, and region-wise packing is like described above with the Figure 5a. A sequence of packed pictures of either the left view or the right view can be independently coded or, when using a multiview video encoder, predicted from the other view.
[0102] The breakdown of image stitching, projection, and region-wise packing process for stereoscopic content where both views are mapped onto the same packed picture is illustrated with the Figure 5b and described as follows. Input images (Bi) are stitched and projected onto two three-dimensional projection structures, one for each eye. The image data on each projection structure is further arranged onto a two-dimensional projected picture (CL for left eye, CR for right eye), which covers the entire sphere. Frame packing is applied to pack the left view picture and right view picture onto the same projected picture. Optionally, region-wise packing is then applied to pack the projected picture onto a packed picture, and the packed picture (D) is given as input to image/video encoding. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding.
[0103] The image stitching, projection, and region-wise packing process can be carried out multiple times for the same source images to create different versions of the same content, e.g. for different orientations of the projection structure. Similarly, the region-wise packing process can be performed multiple times from the same projected picture to create more than one sequence of packed pictures to be encoded.
[0104] 360-degree panoramic content (i.e., images and video) covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device. The vertical field-of-view may vary and can be e.g. 180 degrees. A panoramic image covering a 360-degree field-of-view horizontally and a 180-degree field-of-view vertically can be represented by a sphere that can be mapped to a bounding cylinder that can be cut vertically to form a 2D picture (this type of projection is known as equirectangular projection). The process of forming a monoscopic equirectangular panorama picture is illustrated in Figure 6. A set of input images, such as fisheye images of a camera array or a camera device with multiple lenses and sensors, is stitched onto a spherical image. The spherical image is further projected onto a cylinder (without the top and bottom faces). The cylinder is unfolded to form a two-dimensional projected frame. In practice one or more of the presented steps may be merged; for example, the input images may be directly projected onto a cylinder without an intermediate projection onto a sphere. The projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface.
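To make the equirectangular mapping concrete, the following minimal C sketch converts a sample position (i, j) in an ERP picture of size width x height into sphere coordinates, using one common convention in which azimuth spans +/-180 degrees and elevation spans +/-90 degrees with the picture centre at (0, 0); the exact sample-position convention of a given specification may differ slightly, so this is an illustrative sketch rather than normative text.

    /* Map an ERP sample position (i, j) to sphere coordinates in degrees.
       i runs from 0 (left) to width - 1, j from 0 (top) to height - 1.
       Azimuth decreases from +180 deg at the left edge to -180 deg at the
       right edge; elevation decreases from +90 deg at the top to -90 deg
       at the bottom. */
    static void erp_to_sphere(int i, int j, int width, int height,
                              double *azimuth_deg, double *elevation_deg)
    {
        *azimuth_deg   = (0.5 - (i + 0.5) / width)  * 360.0;
        *elevation_deg = (0.5 - (j + 0.5) / height) * 180.0;
    }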
[0105] In general, 360-degree content can be mapped onto different types of solid geometrical structures, such as a polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), a cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), a cylinder (directly without projecting onto a sphere first), a cone, etc. and then unwrapped to a two-dimensional image plane.
[0106] In some cases panoramic content with a 360-degree horizontal field-of-view but with less than a 180-degree vertical field-of-view may be considered a special case of panoramic projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane. In some cases a panoramic image may have less than a 360-degree horizontal field-of-view and up to a 180-degree vertical field-of-view, while otherwise having the characteristics of a panoramic projection format.
[0107] OMAF allows the omission of image stitching, projection, and region-wise packing and encoding of the image/video data in its captured format. In this case, images D are considered the same as images Bi and a limited number of fisheye images per time instance are encoded.
[0108] For audio, the stitching process is not needed, since the captured signals are inherently immersive and omnidirectional.
[0109] The stitched images (D) are encoded as coded images (Ei) or a coded video bitstream (Ev). The captured audio (Ba) is encoded as an audio bitstream (Ea). The coded images, video, and/or audio are then composed into a media file for file playback (F) or a sequence of an initialization segment and media segments for streaming (Fs), according to a particular media container file format. In this specification, the media container file format is the ISO base media file format. The file encapsulator also includes metadata into the file or the segments, such as projection and region-wise packing information assisting in rendering the decoded packed pictures.
[0110] The metadata in the file may include:
- the projection format of the projected picture,
- fisheye video parameters,
- the area of the spherical surface covered by the packed picture,
- the orientation of the projection structure corresponding to the projected picture relative to the global coordinate axes,
- region-wise packing information, and
- region-wise quality ranking (optional).
[0111] The segments Fs are delivered using a delivery mechanism to a player.
[0112] The file that the file encapsulator outputs (F) is identical to the file that the file decapsulator inputs (F'). A file decapsulator processes the file (F') or the received segments (F's) and extracts the coded bitstreams (E'a, E'v, and/or E'i) and parses the metadata. The audio, video, and/or images are then decoded into decoded signals (B'a for audio, and D' for images/video). The decoded packed pictures (D') are projected onto the screen of a head-mounted display or any other display device based on the current viewing orientation or viewport and the projection, spherical coverage, projection structure orientation, and region-wise packing metadata parsed from the file. Likewise, decoded audio (B'a) is rendered, e.g. through headphones, according to the current viewing orientation. The current viewing orientation is determined by the head tracking and possibly also eye tracking functionality. Besides being used by the renderer to render the appropriate part of decoded video and audio signals, the current viewing orientation may also be used by the video and audio decoders for decoding optimization.
[0113] The process described above is applicable to both live and on-demand use cases.
[0114] The human eyes are not capable of viewing the whole 360-degree space, but are limited to maximum horizontal and vertical FoVs (HHFoV, HVFoV). Also, an HMD device has technical limitations that allow viewing only a subset of the whole 360-degree space in horizontal and vertical directions (DHFoV, DVFoV).
[0115] At any point of time, a video rendered by an application on an HMD renders a portion of the 360-degree video. This portion is defined here as a viewport. A viewport may be defined as a region of omnidirectional image or video suitable for display and viewing by the user. A current viewport (which may sometimes be referred to simply as a viewport) may be defined as the part of the spherical video that is currently displayed and hence is viewable by the user(s). At any point of time, a video rendered by an application on a head-mounted display (HMD) renders a portion of the 360-degree video, which is referred to as a viewport. Likewise, when viewing a spatial part of the 360-degree content on a conventional display, the spatial part that is currently displayed is a viewport. A viewport is a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display. A viewport may be characterized by a horizontal field-of-view (VHFoV) and a vertical field-of-view (VVFoV). In the following, the horizontal field-of-view of the viewport will be abbreviated with HFoV and, respectively, the vertical field-of-view of the viewport will be abbreviated with VFoV.
[0116] A sphere region may be defined as a region on a sphere that may be specified by four great circles or by two azimuth circles and two elevation circles and additionally by a tilt angle indicating rotation along the axis originating from the sphere origin passing through the center point of the sphere region. A great circle may be defined as an intersection of the sphere and a plane that passes through the center point of the sphere. A great circle is also known as an orthodrome or Riemannian circle. An azimuth circle may be defined as a circle on the sphere connecting all points with the same azimuth value. An elevation circle may be defined as a circle on the sphere connecting all points with the same elevation value.
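As an illustration of the second form of sphere region described above (bounded by two azimuth circles and two elevation circles), the following C sketch checks whether a point on the sphere falls inside such a region; it is a simplified example that ignores the tilt angle and the azimuth wrap-around at +/-180 degrees, both of which a complete implementation would have to handle.

    /* Check whether a point (az, el), in degrees, lies inside a sphere region
       bounded by two azimuth circles and two elevation circles. centre_az and
       centre_el give the region centre, az_range and el_range its full extent
       in each direction. Tilt and azimuth wrap-around are ignored here. */
    static int point_in_sphere_region(double az, double el,
                                      double centre_az, double centre_el,
                                      double az_range, double el_range)
    {
        return (az >= centre_az - az_range / 2.0) &&
               (az <= centre_az + az_range / 2.0) &&
               (el >= centre_el - el_range / 2.0) &&
               (el <= centre_el + el_range / 2.0);
    }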
[0117] The coordinate system of OMAF consists of a unit sphere and three coordinate axes, namely the X (back-to-front) axis, the Y (lateral, side-to-side) axis, and the Z (vertical, up) axis, where the three axes cross at the centre of the sphere. The location of a point on the sphere is identified by a pair of sphere coordinates, azimuth (φ) and elevation (θ).
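For illustration, the following C sketch converts the sphere coordinates (azimuth, elevation), given in degrees, into a unit vector on the X/Y/Z axes described above. The trigonometric form shown is the commonly used conversion for this kind of coordinate system and is given here as a sketch rather than as normative text.

    /* Convert sphere coordinates (azimuth, elevation) in degrees into a
       unit vector (x, y, z) in the coordinate system described above:
       X back-to-front, Y lateral, Z vertical. */
    #include <math.h>

    static const double PI = 3.14159265358979323846;

    static void sphere_to_unit_vector(double azimuth_deg, double elevation_deg,
                                      double *x, double *y, double *z)
    {
        double az = azimuth_deg * PI / 180.0;
        double el = elevation_deg * PI / 180.0;
        *x = cos(el) * cos(az);
        *y = cos(el) * sin(az);
        *z = sin(el);
    }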
[0118] The Omnidirectional Media Format (“OMAF”) standard (ISO/IEC 23090-2) specifies a generic timed metadata syntax for sphere regions. A purpose for the timed metadata track is indicated by the track sample entry type. The sample format of all metadata tracks specified for sphere regions starts with a common part and may be followed by an extension part that is specific to the sample entry of the metadata track. Each sample specifies a sphere region.
[0119] OMAF version 2 introduces new concepts such as Viewpoints and specifies ISOBMFF metadata and DASH MPD signaling for Viewpoints.
[0120] OMAF version 2 allows Viewpoints to be static, e.g., Viewpoints may be captured by 360° video cameras at fixed positions. Moreover, OMAF version 2 allows Viewpoints to be dynamic, e.g., a Viewpoint may be captured by a 360° video camera mounted on a flying drone. Metadata and signaling for both static and dynamic Viewpoints are supported in OMAF v2.

[0121] OMAF version 2 enables the switching between mutually related Viewpoints to be seamless in the sense that after switching the user still sees the same object, e.g., the same player in a sport game, just from a different viewing angle. The term viewpoint group is defined in OMAF version 2 and may comprise mutually related Viewpoints. However, when Viewpoints are not mutually related, switching between the two Viewpoints may incur a noticeable cut or transition.
[0122] When multiple Viewpoints exist, identification and association of tracks or image items belonging to one Viewpoint may be needed. For this purpose, OMAF v2 specifies the viewpoint entity grouping. Other possible file format mechanisms for this purpose in addition to or instead of entity grouping include a track group. Through this grouping mechanism, metadata for a viewpoint is signaled, to provide an identifier (ID) of the Viewpoint and a set of other information that can be used to assist streaming of the content and switching between different Viewpoints. Such information may include:
- A (textual) label, for annotation of the viewpoint.
- Mapping of the viewpoint to a viewpoint group of an indicated viewpoint group ID. This information provides a means to indicate whether the switching between two particular viewpoints can be seamless, and if not, the client does not need to bother trying it.
- Viewpoint position relative to the common reference coordinate system shared by all viewpoints of a viewpoint group. The purpose of having the viewpoint position, and having the 3D coordinates specified in a high-precision unit, is to enable a good user experience during viewpoint switching, provided that the client can properly utilize the positions of the two viewpoints involved in the switching in its rendering processing.
- Rotation information for conversion between the global coordinate system of the viewpoint and the common reference coordinate system.
- Optionally, rotation information for conversion between the common reference coordinate system and the compass points, such as the geomagnetic north.
- Optionally, the GPS position of the viewpoint, which enables the client application to position a viewpoint on a real-world map, which can be user-friendly in certain scenarios.
- Optionally, viewpoint switching information, which provides a number of switching transitions possible from the current viewpoint, and for each of these, information such as the destination viewpoint, the viewport to view after switching, the presentation time to start playing back the destination viewpoint, and a recommended transition effect during switching (such as zoom-in, walk-through, fade-to-black, or mirroring).
- Optionally, viewpoint looping information indicating which time period of the presentation is looped and a maximum count of how many times the time period is looped. The looping feature can be used for requesting the end-user's input for initiating viewpoint switching.
[0123] For dynamic Viewpoints, the above information may be stored in timed metadata track(s) that may be time-synchronized with the media track(s).
[0124] The Committee Draft of OMAF version 2 from November 2019 contains the following data structures for ISOBMFF, which may be used for viewpoint switching, among other things.
[0125] aligned(8) class ViewpointPosStruct() {
    signed int(32) viewpoint_pos_x;
    signed int(32) viewpoint_pos_y;
    signed int(32) viewpoint_pos_z;
}
[0126] aligned(8) class ViewpointGpsPositionStruct() {
    signed int(32) viewpoint_gpspos_longitude;
    signed int(32) viewpoint_gpspos_latitude;
    signed int(32) viewpoint_gpspos_altitude;
}
[0127] aligned(8) class ViewpointGeomagneticInfoStruct() {
    signed int(32) viewpoint_geomagnetic_yaw;
    signed int(32) viewpoint_geomagnetic_pitch;
    signed int(32) viewpoint_geomagnetic_roll;
}
[0128] aligned(8) class ViewpointGlobalCoordinateSysRotationStruct() {
    signed int(32) viewpoint_gcs_yaw;
    signed int(32) viewpoint_gcs_pitch;
    signed int(32) viewpoint_gcs_roll;
}

aligned(8) class ViewpointGroupStruct() {
    unsigned int(8) vwpt_group_id;
    utf8string vwpt_group_description;
}
[0129] aligned(8) class ViewpointSwitchingListStruct() {
    unsigned int(8) num_viewpoint_switching;
    for (i = 0; i < num_viewpoint_switching; i++) {
        unsigned int(32) destination_viewpoint_id;
        unsigned int(2) viewing_orientation_in_destination_viewport_mode;
        unsigned int(1) transition_effect_flag;
        unsigned int(1) timeline_switching_offset_flag;
        unsigned int(1) viewpoint_switch_region_flag;
        bit(3) reserved = 0;
        // which viewport to switch to in the destination viewpoint
        if (viewing_orientation_in_destination_viewport_mode == 1)
            SphereRegionStruct(0,0); // definition of the destination viewport as a sphere region
        if (timeline_switching_offset_flag)
            ViewpointTimelineSwitchStruct();
        if (transition_effect_flag) {
            unsigned int(8) transition_effect_type;
            if (transition_effect_type == 4)
                unsigned int(32) transition_video_track_id;
            if (transition_effect_type == 5)
                utf8string transition_video_URL;
        }
        if (viewpoint_switch_region_flag)
            ViewpointSwitchRegionStruct();
    }
}
[0130] aligned(8) class ViewpointTimelineSwitchStruct() {
    unsigned int(1) content_type;
    unsigned int(1) absolute_relative_t_offset_flag;
    unsigned int(1) min_time_flag;
    unsigned int(1) max_time_flag;
    bit(4) reserved = 0;
    // time window for activation of the switching
    if (min_time_flag)
        signed int(32) t_min;
    if (max_time_flag)
        signed int(32) t_max;
    if (absolute_relative_t_offset_flag == 0)
        unsigned int(32) absolute_t_offset;
    else
        unsigned int(32) relative_t_offset;
}
[0131] aligned(8) class ViewpointLoopingStruct() {
    unsigned int(1) max_loops_flag;
    unsigned int(1) destination_viewpoint_flag;
    unsigned int(1) loop_activation_flag;
    unsigned int(1) loop_start_flag;
    bit(4) reserved = 0;
    if (max_loops_flag)
        signed int(8) max_loops; // -1 for infinite loops
    if (destination_viewpoint_flag)
        unsigned int(16) destination_viewpoint_id;
    if (loop_activation_flag)
        signed int(32) loop_activation_time;
    if (loop_start_flag)
        signed int(32) loop_start_time;
}
[0132] aligned(8) class ViewpointSwitchRegionStruct() {
    unsigned int(2) region_type;
    unsigned int(6) reserved = 0;
    if (region_type == 0) { // viewport relative position
        unsigned int(16) rect_left_percent;
        unsigned int(16) rect_top_percent;
        unsigned int(16) rect_width_percent;
        unsigned int(16) rect_height_percent;
    } else if (region_type == 1) // sphere relative position
        SphereRegionStruct(1,1);
    else if (region_type == 2) // overlay
        ref_overlay_id;
}
[0133] In OMAF DASH MPD, a Viewpoint element with a @schemeIdUri attribute equal to "urn:mpeg:mpegI:omaf:2018:vwpt" is referred to as a viewpoint information (VWPT) descriptor.

[0134] At most one VWPT descriptor may be present at adaptation set level and no VWPT descriptor shall be present at any other level. When no Adaptation Set in the Media Presentation contains a VWPT descriptor, the Media Presentation is inferred to contain only one viewpoint.
[0135] The @value specifies the viewpoint ID of the viewpoint. The ViewPointInfo is a Container element whose sub-elements and attributes provide information about the viewpoint. The ViewPointInfo@label attribute specifies a string that provides a human readable label for the viewpoint. The ViewPointInfo.Position attributes of this element specify the position information for the viewpoint.
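Purely for illustration, a VWPT descriptor at adaptation set level could look roughly as in the following MPD excerpt. The Viewpoint element, the @schemeIdUri value, @value, ViewPointInfo, ViewPointInfo@label and ViewPointInfo.Position follow the description above, whereas the attribute names of the Position sub-element (shown as x, y and z) and the surrounding MPD details are assumptions made for this sketch and not normative syntax.

    <AdaptationSet id="1" mimeType="video/mp4">
      <!-- Viewpoint information (VWPT) descriptor; @value carries the viewpoint ID -->
      <Viewpoint schemeIdUri="urn:mpeg:mpegI:omaf:2018:vwpt" value="1">
        <ViewPointInfo label="Main stage camera">
          <!-- attribute names of Position are illustrative assumptions -->
          <Position x="0" y="0" z="0"/>
        </ViewPointInfo>
      </Viewpoint>
      <!-- Representations of the viewpoint omitted for brevity -->
    </AdaptationSet>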
[0136] Viewpoints may, for example, represent different viewing positions to the same scene, or provide completely different scenes, e.g. in a virtual tourist tour. There can be different expectations for the user experience when switching from one viewpoint to another: either expecting a quick response by stopping the playback of the current viewpoint immediately and switching to the new viewpoint as soon as possible, or ensuring content continuity over the switch and hence keeping the current viewpoint playing as long as needed.
[0137] For example:
- For sports content, it is more important that the game is visible continuously even if there is perceived latency in terms of switching the viewpoint. Other such examples could be viewpoints which are part of the same group or viewpoints with a common visual scene.
- For multiple viewpoint content where the user is browsing through different viewpoints which are not part of the same viewpoint group, it would be more intuitive if the content switches as soon as possible. Other such examples could be viewpoints which are part of different viewpoint groups or viewpoints with different visual scenes.
[0138] Viewpoints may be used to realize alternative storylines. Several options for user-originated Viewpoint switching may be indicated and associated with different user interactions, such as different selectable regions for activating a switch to a particular Viewpoint. Viewpoint looping may be indicated and used e.g. for waiting for the end-user's choice between switching options.
[0139] Viewpoint switching in DASH streaming involves switching DASH adaptation sets. Network delay and bandwidth availability may lead to latency in delivery of the content corresponding to the destination viewpoint. Hence, the switch is hardly ever immediate, but either paused video (the last decoded frame from the original viewpoint) or black frames may be shown to the user:
- Even if there is data available for both viewpoints (current and destination), video cannot be random accessed on a frame basis, but at regular intervals only. In practice, access is possible at DASH segment or sub-segment level. Hence, a discontinuity can be encountered due to a mismatch between the switch point and the next or closest random access point.
- Further, players with a single decoder can start decoding the destination viewpoint only after stopping the decoding of the current viewpoint. This can create a gap, especially since hierarchical video coding structures delay the output from the decoder; the decoder may be able to output frames only after it has processed e.g. 8 frames.
[0140] On the other hand, if the player tries to ensure continuous playback over the switch, the single-decoder restriction can introduce a short gap (the time to re-initialize the decoder for the new viewpoint data). Further, an additional gap between the last played frame of the current viewpoint and the first playable frame of the destination viewpoint can be experienced if the DASH segments of the two viewpoints are not aligned, as the player either needs to skip some frames or decode but skip rendering too-old frames from the destination viewpoint.
[0141] The existing OMAF v2 specification draft includes signaling for a possible transition effect to take place when switching between viewpoints. Some of the transition effects, however, require having decoded content available from both viewpoints, whereas some require additional decoding resources, hence making them more difficult to use in resource-constrained systems. Hence, it may be expected that switching without any transition effect is the most common way to switch viewpoints.
[0142] The ViewpointTimelineSwitchStruct, on the other hand, contains the parameters t_min and t_max, which set the limits for when the switch is enabled. E.g. if there is a viewpoint that is active only for 30 seconds, starting from 1 minute after the content start, then t_min is set as 60 sec and t_max as 90 sec. The offsets (absolute and relative), on the other hand, specify the position in the destination viewpoint timeline where the playback must start. E.g. if there is a viewpoint that can be used to show past events again, then the offset is set accordingly. However, there is no signaling or hardcoded specification of what should happen with the current viewpoint data during switching, i.e. whether the player should continue playing it or not. In practice, there is typically a delay in the order of hundreds of milliseconds to a few seconds before the playback of the new viewpoint can start due to network latencies etc.
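The following C sketch illustrates how a player might evaluate these parameters when a switch is requested: checking the activation window and deriving the starting position on the destination timeline. The structure and helper names are assumptions made for this illustration, not part of the specification.

    /* Illustrative, simplified view of the timeline switch parameters. */
    typedef struct {
        int has_t_min, has_t_max;   /* min_time_flag / max_time_flag           */
        double t_min, t_max;        /* activation window, in seconds           */
        int offset_is_absolute;     /* absolute_relative_t_offset_flag == 0    */
        double absolute_t_offset;   /* start position on destination timeline  */
        double relative_t_offset;   /* offset added to the current position    */
    } TimelineSwitchInfo;

    /* Return 1 and compute the destination start time if the switch is
       currently allowed, 0 otherwise. current_time is the playback position
       of the current viewpoint, in seconds. */
    static int evaluate_switch(const TimelineSwitchInfo *info, double current_time,
                               double *destination_start_time)
    {
        if (info->has_t_min && current_time < info->t_min)
            return 0;   /* switch not yet enabled */
        if (info->has_t_max && current_time > info->t_max)
            return 0;   /* switch no longer enabled */
        *destination_start_time = info->offset_is_absolute
            ? info->absolute_t_offset
            : current_time + info->relative_t_offset;
        return 1;
    }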
[0143] As a result, there is a risk of inconsistent player behavior and unexpected user experience, if a player chooses to delay the switching instances to ensure continuous playback (e.g., if content is available for the current viewpoint but not yet for the destination viewpoint) or chooses to switch immediately to a new viewpoint with the associated risk of not having the content available for playback from the next playout sample.
[0144] The OMAF v2 specification also allows automated viewpoint switching to take place when a specified number (1 or more) of loops of the viewpoint content has been played, or if a recommended viewport for multiple viewpoints timed metadata track is used. In the first case, players should be able to prepare for the switch by prefetching media for the next viewpoint. In the second case, if a player is reading the timed metadata track with the same timeline as the media tracks, i.e. not in advance, a viewpoint switch may come as a surprise to the player, similar to user interaction.
[0145] Now an improved method for viewpoint switching is introduced in order to at least alleviate the above problems.
[0146] The method according to an aspect, as shown in Figure 7, comprises encoding (700) omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encoding (700) metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
[0147] Thus, for avoiding inconsistent player behavior and unexpected user experience upon a viewpoint switch during playback, the content author may provide an indication for controlling the viewpoint switch e.g. from the first viewpoint representation to the second viewpoint representation in a manner considered desirable by the content author. Knowing the type of content in the first and the second viewpoint representation associated with the mutually related viewpoints, the content author may have the best knowledge of what kind of viewpoint switch would be preferable between the first and the second viewpoint representations. For ensuring a satisfying user experience, the indication provided by the content author for controlling the viewpoint switch may provide the best results.
[0148] According to an embodiment, said indication comprises at least one parameter indicating at least one of the following: - in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point;
- in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be delayed until the second viewpoint representation has been decoded ready for rendering.
[0149] Thus, the content author may provide at least two options for controlling the viewpoint switch: 1) switch to the second viewpoint immediately, or at least as soon as possible, i.e. at the next available switch point, or 2) continue playback of the first viewpoint until the second viewpoint is ready to be rendered. In the first option, after switching to the second viewpoint, the content of the second viewpoint may not be ready for rendering, and a black screen or a transition effect may be displayed while waiting for the second viewpoint to be ready for rendering. In the second option, while waiting for the switch to the second viewpoint to take place, an indication about a viewpoint switch being in progress may be displayed.
[0150] The playback action triggering the viewpoint switch may relate at least to a user interaction, such as a user of an HMD turning and/or tilting the head to a new viewport or a signalled request from the user to switch to another viewpoint. The playback action triggering the viewpoint switch may also relate to an error situation in the playback, e.g. the receiving, decoding and/or rendering of the first viewpoint representation is interrupted for some reason, and this triggers the viewpoint switch to the second viewpoint representation.
[0151] According to an embodiment, said indication is configured to be encoded as a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
[0152] Thus, a flag, which may be referred to herein as switch_type_flag, is used for indicating which one of the two options for controlling the viewpoint switch should be applied upon a playback action triggering a viewpoint switch.
[0153] According to an embodiment, a signalling of said indication (e.g. the flag) is configured to be carried out by at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
[0154] An example of including said at least one syntax element into the ViewpointTimelineSwitchStruct syntax element is shown below.

aligned(8) class ViewpointTimelineSwitchStruct() {
    unsigned int(1) content_type;
    unsigned int(1) absolute_relative_t_offset_flag;
    unsigned int(1) min_time_flag;
    unsigned int(1) max_time_flag;
    unsigned int(1) switch_type_flag;
    bit(3) reserved = 0;
    // time window for activation of the switching
    if (min_time_flag)
        signed int(32) t_min;
    if (max_time_flag)
        signed int(32) t_max;
    if (absolute_relative_t_offset_flag == 0)
        unsigned int(32) absolute_t_offset;
    else
        unsigned int(32) relative_t_offset;
}
[0155] If the switch_type_flag is set to 1, the player should respond to the user interaction (or signaled request) and switch to the new viewpoint immediately (or as soon as possible). The switch may take place with a period of black screen or via a transition effect, if a transition effect is configured for the switch.
[0156] If the switch_type_flag is set to 0, the player should try to ensure video and audio content continuity over the switch and continue playing the current viewpoint until playable content is available for the destination viewpoint. The player may show some indication that the switch is in progress.
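Purely as an illustration of this player behavior, the following C sketch shows how a player's switching logic might branch on switch_type_flag; the enumeration, the destination_ready flag and the helper function are hypothetical and not defined by the specification.

    /* Hypothetical player-side reaction to a viewpoint switch request. */
    typedef enum {
        SWITCH_DELAYED_UNTIL_READY  = 0,  /* switch_type_flag == 0 */
        SWITCH_AT_NEXT_SWITCH_POINT = 1   /* switch_type_flag == 1 */
    } SwitchType;

    /* Returns 1 when rendering should already come from the destination
       viewpoint (showing black frames or a configured transition effect until
       its first frame is decodable), and 0 while the current viewpoint should
       still be played, optionally with a "switch in progress" indication. */
    static int handle_switch_request(SwitchType type, int destination_ready)
    {
        if (type == SWITCH_AT_NEXT_SWITCH_POINT)
            return 1;              /* stop the current viewpoint at the next switch point */
        return destination_ready;  /* delayed switch: wait until destination is ready */
    }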
[0157] According to an embodiment, a signalling of said indication (e.g. the flag) is configured to be carried out by at least one syntax element included in a VWPT descriptor, e.g. within its ViewPointInfo element, or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
[0158] It is noted that audio may be common between the viewpoints, and it can continue uninterrupted over the viewpoint switch. According to an embodiment, a player concludes whether audio is common between the viewpoints. For example, when the same audio track is included in a first viewpoint entity group and a second viewpoint entity group, representing viewpoints between which the viewpoint switching takes place, the player may conclude that the audio is common between the viewpoints. Otherwise, the player may conclude that the audio is not common between the viewpoints. When the audio is concluded to be common between the viewpoints, the player continues the audio decoding and playback in an uninterrupted manner.

[0159] According to an embodiment, the desired player behavior may be signaled as a transition effect. In particular, together with more detailed player implementation signaling, signaling the player behavior as a transition effect may be expected to be applicable to any viewpoint switch.
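Returning to the audio commonality check described above, the following C sketch decides whether an audio track is common to two viewpoints by testing whether the same track identifier appears in both viewpoint entity groups; the data layout is an assumption made for this example.

    #include <stdint.h>

    /* Minimal view of a viewpoint entity group: the track IDs it contains. */
    typedef struct {
        const uint32_t *track_ids;
        int num_tracks;
    } ViewpointEntityGroup;

    /* Return 1 if the given audio track ID is present in both entity groups,
       i.e. the audio can continue uninterrupted over the viewpoint switch. */
    static int audio_common_between_viewpoints(uint32_t audio_track_id,
                                               const ViewpointEntityGroup *a,
                                               const ViewpointEntityGroup *b)
    {
        int in_a = 0, in_b = 0;
        for (int i = 0; i < a->num_tracks; i++)
            if (a->track_ids[i] == audio_track_id) in_a = 1;
        for (int i = 0; i < b->num_tracks; i++)
            if (b->track_ids[i] == audio_track_id) in_b = 1;
        return in_a && in_b;
    }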
[0160] According to an embodiment, a signalling of said indication (e.g. the flag) is configured to be carried out by at least one syntax element included in a ViewpointSwitchingListStruct syntax structure or any other suitable syntax structure for ISO/IEC 23090-2 (or similar omnidirectional media coding technology).
[0161] An example of including said at least one syntax element into the ViewpointSwitchingListStruct syntax element is shown below.

aligned(8) class ViewpointSwitchingListStruct() {
    unsigned int(8) num_viewpoint_switching;
    for (i = 0; i < num_viewpoint_switching; i++) {
        unsigned int(32) destination_viewpoint_id;
        unsigned int(2) viewing_orientation_in_destination_viewport_mode;
        unsigned int(1) transition_effect_flag;
        unsigned int(1) timeline_switching_offset_flag;
        unsigned int(1) viewpoint_switch_region_flag;
        unsigned int(1) switch_type_flag;
        bit(2) reserved = 0;
        // which viewport to switch to in the destination viewpoint
        if (viewing_orientation_in_destination_viewport_mode == 1)
            SphereRegionStruct(0,0); // definition of the destination viewport as a sphere region
        if (timeline_switching_offset_flag)
            ViewpointTimelineSwitchStruct();
        if (transition_effect_flag) {
            unsigned int(8) transition_effect_type;
            if (transition_effect_type == 4)
                unsigned int(32) transition_video_track_id;
            if (transition_effect_type == 5)
                utf8string transition_video_URL;
        }
        if (viewpoint_switch_region_flag)
            ViewpointSwitchRegionStruct();
    }
}

[0162] In the above embodiment, the switch_type_flag is included in the ViewpointSwitchingListStruct(), if the timeline_switching_offset_flag is equal to 0.
[0163] According to an embodiment, if the indication indicates that the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point, a second parameter may be encoded in the metadata for indicating a timeout value for the switch from the first viewpoint representation to the second viewpoint representation to be completed.
[0164] Hence, the signaling could be enhanced by a timeout value, or as a combination of the indication (e.g. the flag) and an additional parameter for the timeout value. The timeout value may indicate the time period in which the player is expected to complete the viewpoint switch. This may require the player to choose a bandwidth representation of the destination viewport that meets the recent network conditions, if available.
[0165] The timeout value may be incorporated in the ViewpointTimelineSwitchStruct() or ViewpointSwitchingListStruct() in the following manner:

unsigned int(1) switch_type_flag;
if (!switch_type_flag) {
    unsigned int(32) switch_duration;
}
[0166] The timeout value is indicated by the parameter switch_duration. If the switch_type_flag is equal to 1, it indicates that the switching should be immediate or as fast as possible.
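To illustrate how a player might honor such a timeout, the following C sketch selects the highest-bitrate representation of the destination viewpoint whose first segment can plausibly be fetched within switch_duration, given a throughput estimate; the representation structure and the simple feasibility test are assumptions made for this example.

    /* Hypothetical description of one available representation of the
       destination viewpoint. */
    typedef struct {
        double bitrate_bps;        /* average bitrate of the representation  */
        double segment_duration_s; /* duration of one media segment, seconds */
    } Representation;

    /* Pick the highest-bitrate representation whose first segment can be
       fetched within the signaled switch duration, assuming the given
       throughput estimate. Returns the index, or -1 if none fits. */
    static int pick_representation(const Representation *reps, int num_reps,
                                   double switch_duration_s,
                                   double estimated_throughput_bps)
    {
        int best = -1;
        double best_bitrate = 0.0;
        for (int i = 0; i < num_reps; i++) {
            double segment_bits = reps[i].bitrate_bps * reps[i].segment_duration_s;
            double fetch_time_s = segment_bits / estimated_throughput_bps;
            if (fetch_time_s <= switch_duration_s && reps[i].bitrate_bps > best_bitrate) {
                best = i;
                best_bitrate = reps[i].bitrate_bps;
            }
        }
        return best;
    }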
[0167] According to an embodiment, the viewpoint switching implementation for the player utilizes the t_max and t_min in the ViewpointTimelineSwitchStruct() to prefetch content. This may be further optimized by taking into account whether the user orientation in the current viewpoint is within a predefined threshold of, or overlapping, the viewpoint switch activation region. This will be applicable to non-viewport-locked overlays. Prefetching may be started when the pointer of the HMD is getting close to the switch region. For non-HMD consumption, the region displayed on a conventional display is considered the current viewport orientation. The proximity threshold to the viewpoint switch activation region may be signaled with the ViewpointTimelineSwitchStruct().

[0168] Another aspect relates to the operation of a player or a decoder upon receiving the above-described indication for controlling the viewpoint switch.
[0169] The operation may include, as shown in Figure 8, receiving (800) at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decoding (802) and rendering said first encoded viewpoint representation for playback; receiving (804), from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switching (806), in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
[0170] The embodiments relating to the encoding aspects may be implemented in an apparatus comprising: means for encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and means for encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
[0171] The embodiments relating to the encoding aspects may likewise be implemented in an apparatus comprising at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: encode omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encode metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
[0172] The embodiments relating to the decoding aspects may be implemented in an apparatus comprising means for receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; means for decoding and means for rendering said first encoded viewpoint representation for playback; means for receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and means for switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.

[0173] The embodiments relating to the decoding aspects may likewise be implemented in an apparatus comprising at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: receive at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content; decode and render said first encoded viewpoint representation for playback; receive, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and switch, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
[0174] Such apparatuses may comprise e.g. the functional units disclosed in any of the Figures 1, 2, 3a and 3b for implementing the embodiments.
[0175] Herein, the decoder should be interpreted to cover any operational unit capable to carry out the decoding operations, such as a player, a receiver, a gateway, a demultiplexer and/or a decoder.
[0176] Figure 9 is a graphical representation of an example multimedia communication system within which various embodiments may be implemented. A data source 1510 provides a source signal in an analog, uncompressed digital, or compressed digital format, or any combination of these formats. An encoder 1520 may include or be connected with pre-processing, such as data format conversion and/or filtering of the source signal. The encoder 1520 encodes the source signal into a coded media bitstream. It should be noted that a bitstream to be decoded may be received directly or indirectly from a remote device located within virtually any type of network. Additionally, the bitstream may be received from local hardware or software. The encoder 1520 may be capable of encoding more than one media type, such as audio and video, or more than one encoder 1520 may be required to code different media types of the source signal. The encoder 1520 may also get synthetically produced input, such as graphics and text, or it may be capable of producing coded bitstreams of synthetic media. In the following, only processing of one coded media bitstream of one media type is considered to simplify the description. It should be noted, however, that typically real-time broadcast services comprise several streams (typically at least one audio, video and text sub-titling stream). It should also be noted that the system may include many encoders, but in the figure only one encoder 1520 is represented to simplify the description without a lack of generality. It should be further understood that, although text and examples contained herein may specifically describe an encoding process, one skilled in the art would understand that the same concepts and principles also apply to the corresponding decoding process and vice versa.
[0177] The coded media bitstream may be transferred to a storage 1530. The storage 1530 may comprise any type of mass memory to store the coded media bitstream. The format of the coded media bitstream in the storage 1530 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file, or the coded media bitstream may be encapsulated into a Segment format suitable for DASH (or a similar streaming system) and stored as a sequence of Segments. If one or more media bitstreams are encapsulated in a container file, a file generator (not shown in the figure) may be used to store the one or more media bitstreams in the file and create file format metadata, which may also be stored in the file. The encoder 1520 or the storage 1530 may comprise the file generator, or the file generator is operationally attached to either the encoder 1520 or the storage 1530. Some systems operate “live”, i.e. omit storage and transfer coded media bitstream from the encoder 1520 directly to the sender 1540. The coded media bitstream may then be transferred to the sender 1540, also referred to as the server, on a need basis. The format used in the transmission may be an elementary self-contained bitstream format, a packet stream format, a Segment format suitable for DASH (or a similar streaming system), or one or more coded media bitstreams may be encapsulated into a container file. The encoder 1520, the storage 1530, and the server 1540 may reside in the same physical device or they may be included in separate devices. The encoder 1520 and server 1540 may operate with live real-time content, in which case the coded media bitstream is typically not stored permanently, but rather buffered for small periods of time in the content encoder 1520 and/or in the server 1540 to smooth out variations in processing delay, transfer delay, and coded media bitrate.
[0178] The server 1540 sends the coded media bitstream using a communication protocol stack. The stack may include but is not limited to one or more of Real-Time Transport Protocol (RTP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Transmission Control Protocol (TCP), and Internet Protocol (IP). When the communication protocol stack is packet-oriented, the server 1540 encapsulates the coded media bitstream into packets. For example, when RTP is used, the server 1540 encapsulates the coded media bitstream into RTP packets according to an RTP payload format. Typically, each media type has a dedicated RTP payload format. It should be again noted that a system may contain more than one server 1540, but for the sake of simplicity, the following description only considers one server 1540.
[0179] If the media content is encapsulated in a container file for the storage 1530 or for inputting the data to the sender 1540, the sender 1540 may comprise or be operationally attached to a "sending file parser" (not shown in the figure). In particular, if the container file is not transmitted as such but at least one of the contained coded media bitstreams is encapsulated for transport over a communication protocol, a sending file parser locates appropriate parts of the coded media bitstream to be conveyed over the communication protocol. The sending file parser may also help in creating the correct format for the communication protocol, such as packet headers and payloads. The multimedia container file may contain encapsulation instructions, such as hint tracks in the ISOBMFF, for encapsulation of at least one of the contained media bitstreams over the communication protocol.
[0180] The server 1540 may or may not be connected to a gateway 1550 through a communication network, which may e.g. be a combination of a CDN, the Internet and/or one or more access networks. The gateway may also or alternatively be referred to as a middle-box. For DASH, the gateway may be an edge server (of a CDN) or a web proxy. It is noted that the system may generally comprise any number of gateways or the like, but for the sake of simplicity, the following description only considers one gateway 1550. The gateway 1550 may perform different types of functions, such as translation of a packet stream according to one communication protocol stack to another communication protocol stack, merging and forking of data streams, and manipulation of data stream according to the downlink and/or receiver capabilities, such as controlling the bit rate of the forwarded stream according to prevailing downlink network conditions. The gateway 1550 may be a server entity in various embodiments.
[0181] The system includes one or more receivers 1560, typically capable of receiving, demodulating, and decapsulating the transmitted signal into a coded media bitstream. The coded media bitstream may be transferred to a recording storage 1570. The recording storage 1570 may comprise any type of mass memory to store the coded media bitstream. The recording storage 1570 may alternatively or additionally comprise computation memory, such as random access memory. The format of the coded media bitstream in the recording storage 1570 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file. If there are multiple coded media bitstreams, such as an audio stream and a video stream, associated with each other, a container file is typically used and the receiver 1560 comprises or is attached to a container file generator producing a container file from input streams. Some systems operate “live,” i.e. omit the recording storage 1570 and transfer coded media bitstream from the receiver 1560 directly to the decoder 1580. In some systems, only the most recent part of the recorded stream, e.g., the most recent 10-minute excerpt of the recorded stream, is maintained in the recording storage 1570, while any earlier recorded data is discarded from the recording storage 1570.
[0182] The coded media bitstream may be transferred from the recording storage 1570 to the decoder 1580. If there are many coded media bitstreams, such as an audio stream and a video stream, associated with each other and encapsulated into a container file or a single media bitstream is encapsulated in a container file e.g. for easier access, a file parser (not shown in the figure) is used to decapsulate each coded media bitstream from the container file. The recording storage 1570 or a decoder 1580 may comprise the file parser, or the file parser is attached to either the recording storage 1570 or the decoder 1580. It should also be noted that the system may include many decoders, but here only one decoder 1580 is discussed to simplify the description without a lack of generality.
[0183] The coded media bitstream may be processed further by a decoder 1580, whose output is one or more uncompressed media streams. Finally, a renderer 1590 may reproduce the uncompressed media streams with a loudspeaker or a display, for example. The receiver 1560, recording storage 1570, decoder 1580, and renderer 1590 may reside in the same physical device or they may be included in separate devices.
[0184] A sender 1540 and/or a gateway 1550 may be configured to perform switching between different representations e.g. for switching between different viewports of 360-degree video content, view switching, bitrate adaptation and/or fast start-up, and/or a sender 1540 and/or a gateway 1550 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to respond to requests of the receiver 1560 or prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. In other words, the receiver 1560 may initiate switching between representations. A request from the receiver can be, e.g., a request for a Segment or a Subsegment from a different representation than earlier, a request for a change of transmitted scalability layers and/or sub-layers, or a change of a rendering device having different capabilities compared to the previous one. A request for a Segment may be an HTTP GET request. A request for a Subsegment may be an HTTP GET request with a byte range. Additionally or alternatively, bitrate adjustment or bitrate adaptation may be used for example for providing so-called fast start-up in streaming services, where the bitrate of the transmitted stream is lower than the channel bitrate after starting or random-accessing the streaming in order to start playback immediately and to achieve a buffer occupancy level that tolerates occasional packet delays and/or retransmissions. Bitrate adaptation may include multiple representation or layer up-switching and representation or layer down-switching operations taking place in various orders.
[0185] A decoder 1580 may be configured to perform switching between different representations e.g. for switching between different viewports of 360-degree video content, viewpoint switching, bitrate adaptation and/or fast start-up, and/or a decoder 1580 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to achieve faster decoding operation or to adapt the transmitted bitstream, e.g. in terms of bitrate, to prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. Thus, the decoder may comprise means for requesting at least one decoder reset picture of the second representation for carrying out bitrate adaptation between the first representation and a third representation. Faster decoding operation might be needed for example if the device including the decoder 1580 is multi-tasking and uses computing resources for other purposes than decoding the video bitstream. In another example, faster decoding operation might be needed when content is played back at a faster pace than the normal playback speed, e.g. twice or three times faster than conventional real-time playback rate.
[0186] In the above, some embodiments have been described in relation to omnidirectional video. It needs to be understood that the embodiments are not limited to omnidirectional video but could likewise be applied to other types of video, including conventional two-dimensional video. It needs to be understood that the syntax structures relating to OMAF serve as examples and embodiments could similarly be implemented with other syntax structures for other types of video.
[0187] In the above, some embodiments have been described in relation to ISOBMFF, e.g. when it comes to segment format. It needs to be understood that embodiments could be similarly realized with any other file format, such as Matroska, with similar capability and/or structures as those in ISOBMFF.
[0188] In the above, some embodiments have been described in relation to DASH and DASH MPD, e.g. when it comes to media description or streaming manifest. It needs to be understood that embodiments could be similarly realized with any other media description or streaming manifest format, such as the Internet Engineering Task Force (IETF) Session Description Protocol (SDP), as or within parameters of a media type (a.k.a. Multipurpose Internet Mail Extensions type or MIME type) specified by the Internet Assigned Numbers Authority (IANA), or using the M3U manifest format specified in IETF RFC 8216.
[0189] In the above, where the example embodiments have been described with reference to an encoder, it needs to be understood that the resulting bitstream and the decoder may have corresponding elements in them. Likewise, where the example embodiments have been described with reference to a decoder, it needs to be understood that the encoder may have structure and/or computer program for generating the bitstream to be decoded by the decoder.

[0190] The embodiments of the invention described above describe the codec in terms of separate encoder and decoder apparatus in order to assist the understanding of the processes involved. However, it would be appreciated that the apparatus, structures and operations may be implemented as a single encoder-decoder apparatus/structure/operation. Furthermore, it is possible that the coder and decoder may share some or all common elements.
[0191] Although the above examples describe embodiments of the invention operating within a codec within an electronic device, it would be appreciated that the invention as defined in the claims may be implemented as part of any video codec. Thus, for example, embodiments of the invention may be implemented in a video codec which may implement video coding over fixed or wired communication paths.
[0192] Thus, user equipment may comprise a video codec such as those described in embodiments of the invention above. It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
[0193] Furthermore elements of a public land mobile network (PLMN) may also comprise video codecs as described above.
[0194] In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
[0195] The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
[0196] The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multi-core processor architecture, as non-limiting examples.
[0197] Embodiments of the invention may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
[0198] Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.

[0199] The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.


CLAIMS:
1. A method comprising: encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
2. An apparatus comprising at least one processor and at least one memory, said at least one memory stored with computer program code thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: encode omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representation being associated with mutually related viewpoints; and encode metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
3. An apparatus comprising:
means for encoding omnidirectional video media content into at least a first viewpoint representation and a second viewpoint representation, said first and second viewpoint representations being associated with mutually related viewpoints; and
means for encoding metadata, in or along a bitstream comprising at least the encoded first viewpoint representation, said metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation upon playback.
4. The apparatus according to claim 2 or 3, wherein said indication comprises at least one parameter indicating at least one of the following:
- in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point;
- in response to a playback action triggering a viewpoint switch, the switch from the first viewpoint representation to the second viewpoint representation is to be delayed until the second viewpoint representation has been decoded ready for rendering.
5. The apparatus according to claim 4, wherein said indication is configured to be encoded as a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
6. The apparatus according to any of claims 2 - 5, wherein a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure for ISO/IEC 23090-2.
7. The apparatus according to any of claims 2 - 5, wherein a signalling of said indication is configured to be carried out by at least one syntax element included in a ViewpointSwitchingListStruct syntax structure for ISO/IEC 23090-2.
8. The apparatus according to any of claims 4 - 7, wherein the apparatus further comprises means for encoding, responsive to said indication indicating that the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point, a second parameter for indicating a timeout value for the switch from the first viewpoint representation to the second viewpoint representation to be completed.
9. A method comprising:
receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content;
decoding and rendering said first encoded viewpoint representation for playback;
receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and
switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
10. An apparatus comprising at least one processor and at least one memory, said at least one memory having computer program code stored thereon, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to:
receive at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content;
decode and render said first encoded viewpoint representation for playback;
receive, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and
switch, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
11. An apparatus comprising:
means for receiving at least one bitstream corresponding to at least a first encoded viewpoint representation from mutually related viewpoints comprising said first encoded viewpoint representation and at least a second encoded viewpoint representation of omnidirectional video media content;
means for decoding and rendering said first encoded viewpoint representation for playback;
means for receiving, from or along the at least one bitstream, metadata comprising an indication for controlling a switch at least from the first viewpoint representation to the second viewpoint representation; and
means for switching, in response to a playback action triggering a viewpoint switch, to decode and render said second encoded viewpoint representation for playback according to said indication.
12. The apparatus according to claim 10 or 11, wherein said indication is configured to control whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at a next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
13. The apparatus according to claim 12, wherein said indication is configured to be decoded from a flag indicating whether the switch from the first viewpoint representation to the second viewpoint representation is to be started at the next available switch point or to be delayed until the second viewpoint representation has been decoded ready for rendering.
14. The apparatus according to any of claims 10 - 13, wherein a signalling of said indication is configured to be decoded from at least one syntax element included in a ViewpointTimelineSwitchStruct syntax structure for ISO/IEC 23090-2.
15. The apparatus according to any of claims 10 - 13, wherein a signalling of said indication is configured to be decoded from at least one syntax element included in a ViewpointSwitchingListStruct syntax structure for ISO/IEC 23090-2.
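For illustration only, and not as part of the claims or of ISO/IEC 23090-2, the following is a minimal player-side sketch in Python of how the switch indication recited in claims 9 to 15 might be interpreted. The field names switch_at_next_switch_point and switch_timeout_ms are hypothetical placeholders chosen for this sketch; they are not the syntax element names of ViewpointTimelineSwitchStruct or ViewpointSwitchingListStruct.

# Illustrative sketch only; field names are assumptions, not ISO/IEC 23090-2 syntax.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewpointSwitchIndication:
    # True: start the switch at the next available switch point.
    # False: delay the switch until the destination viewpoint representation
    #        has been decoded and is ready for rendering.
    switch_at_next_switch_point: bool
    # Optional timeout (cf. claim 8): upper bound for completing the switch
    # when switching at the next available switch point.
    switch_timeout_ms: Optional[int] = None

def handle_viewpoint_switch(indication: ViewpointSwitchIndication,
                            destination_ready: bool,
                            elapsed_ms: int) -> str:
    """Decide what the player does while a switch from the first viewpoint
    representation to the second viewpoint representation is in progress."""
    if indication.switch_at_next_switch_point:
        # Switch as soon as a switch point is reached; if a timeout is given
        # and has expired, complete the switch without further waiting.
        if (indication.switch_timeout_ms is not None
                and elapsed_ms > indication.switch_timeout_ms):
            return "force-switch-to-second-viewpoint"
        return "switch-at-next-switch-point"
    # Otherwise keep rendering the first viewpoint representation until the
    # second one has been decoded and is ready for rendering.
    return "switch-now" if destination_ready else "keep-rendering-first-viewpoint"

In this sketch the timeout of claim 8 is treated as an upper bound after which the player completes the switch rather than waiting further; other behaviours, such as abandoning the switch, would be equally consistent with a timeout parameter.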

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20205338 2020-04-02
PCT/FI2021/050192 WO2021198553A1 (en) 2020-04-02 2021-03-17 An apparatus, a method and a computer program for video coding and decoding

Publications (2)

Publication Number Publication Date
EP4128808A1 (en)
EP4128808A4 (en) 2024-05-15

Family

Family ID: 77927944

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21779094.8A Pending EP4128808A4 (en) 2020-04-02 2021-03-17 An apparatus, a method and a computer program for video coding and decoding

Country Status (2)

Country Link
EP (1) EP4128808A4 (en)
WO (1) WO2021198553A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112423108B (en) * 2019-08-20 2023-06-30 中兴通讯股份有限公司 Method and device for processing code stream, first terminal, second terminal and storage medium
CN113949829B (en) * 2021-10-15 2022-09-20 腾讯科技(深圳)有限公司 Media file encapsulation and decapsulation method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019194434A1 (en) * 2018-04-05 2019-10-10 엘지전자 주식회사 Method and device for transceiving metadata for plurality of viewpoints
CN112237005B (en) * 2018-04-05 2023-11-07 Vid拓展公司 Viewpoint metadata for omni-directional video
WO2019200227A1 (en) * 2018-04-13 2019-10-17 Futurewei Technologies, Inc. Signaling spatial region correspondence between virtual reality viewpoints
SG11202110312XA (en) * 2019-03-20 2021-10-28 Beijing Xiaomi Mobile Software Co Ltd Method and device for transmitting viewpoint switching capabilities in a vr360 application

Also Published As

Publication number Publication date
WO2021198553A1 (en) 2021-10-07
EP4128808A4 (en) 2024-05-15


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase
Free format text: ORIGINAL CODE: 0009012
STAA Information on the status of an ep patent application or granted ep patent
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
17P Request for examination filed
Effective date: 20221102
AK Designated contracting states
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched
Effective date: 20240411
RIC1 Information provided on ipc code assigned before grant
Ipc: H04N 19/597 20140101ALI20240405BHEP
Ipc: H04N 21/6373 20110101ALI20240405BHEP
Ipc: H04N 13/106 20180101ALI20240405BHEP
Ipc: H04N 13/178 20180101ALI20240405BHEP
Ipc: H04N 21/235 20110101ALI20240405BHEP
Ipc: H04N 21/4728 20110101ALI20240405BHEP
Ipc: H04N 21/6587 20110101AFI20240405BHEP