WO2017196582A1 - System and method for dynamically stitching video streams - Google Patents
System and method for dynamically stitching video streams
- Publication number
- WO2017196582A1 (PCT/US2017/030503, US2017030503W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- encoded
- video frames
- frames
- stitching
- encoded video
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/162—User input
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
Definitions
- The present disclosure relates generally to video processing and, more particularly, to video decoding.
- Video encoders and decoders are used in a wide variety of applications to facilitate the storage and transfer of video streams in a compressed fashion.
- A video stream can be encoded prior to being stored at a memory in order to reduce the amount of space required to store the video stream, then later decoded in order to generate frames for display at a display device.
- Prior to decoding a video stream, the decoder must be initialized in order to prepare memory and other system resources for the decoding process.
- The overhead required to initialize the decoder can significantly impact the efficiency of the decoding process, especially in applications that require decoding of many different video streams.
- FIG. 1 is a block diagram of a video codec configured to stitch together encoded video frames to generate a stitched encoded frame for decoding in accordance with some embodiments.
- FIG. 2 is a block diagram of an example of the video codec of FIG. 1 stitching a set of encoded video frames to generate a stitched encoded frame in accordance with some embodiments.
- FIG. 3 is a block diagram of an example of the video codec of FIG. 1 selecting and stitching different sets of encoded video frames to generate different stitched encoded frames in accordance with some embodiments.
- FIG. 4 is a block diagram of an example of the video codec of FIG. 1 selecting and stitching different sets of encoded video frames to generate different stitched encoded frames comprised of overlapping encoded video frames in accordance with some embodiments.
- FIG. 5 is a block diagram of an example of the video codec of FIG. 1 modifying a header of an encoded video frame to determine the order in which it will be stitched into a stitched encoded frame and generate other video headers in accordance with some embodiments.
- FIG. 6 is a flow chart of a method of stitching together encoded video frames to generate a stitched encoded frame for decoding in accordance with some embodiments.
- FIGs. 1-6 illustrate techniques for reducing initialization overhead at a video codec by stitching independently encoded video frames to generate stitched encoded frames for decoding.
- The video codec includes a stitching module configured to select stored encoded video frames that are to be composed into a concatenated frame for display.
- The stitching module arranges the selected encoded video frames into a specified pattern and stitches the arranged encoded video frames together to generate a stitched encoded frame.
- A decoder of the video codec then decodes the stitched encoded frame to generate the frame for display.
- In order to decode a video frame, the decoder must be initialized by, for example, allocating memory for decoding, preparing buffers and other storage elements, flushing data stored during previous decoding operations, and the like.
- The amount of overhead required to initialize the decoder (referred to herein as the "initialization overhead") is typically independent of the size of the video to be decoded. Accordingly, for some types of devices that generate display frames composed from many independent video streams, the initialization overhead can have a significant impact on codec resources and performance. This is particularly the case where the video streams are relatively small in resolution. For example, in casino gaming and Pachinko/Pachislot devices, each display frame is composed of many independent video streams, where the video streams to be displayed can change frequently over time.
- In a conventional approach, the different independent video streams are encoded and decoded independently, and then composed into the frame for display.
- This approach requires the decoder to be re-initialized for each independent video stream at every frame, such that the initialization overhead consumes an undesirable amount of system resources.
- By contrast, using the techniques described herein, a video codec can dynamically stitch multiple selected encoded frames into a single stitched encoded frame for decoding. This supports decoding a large number of possible combinations of a large number of possible video frames without requiring excessive memory or decoder initialization overhead. Further, by dynamically stitching selected encoded frames into stitched frames for decoding, the number of initializations of the decoder is reduced.
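- The reduction can be made concrete with a rough count of decoder initializations. The sketch below is illustrative only and is not taken from the patent: the frame and stream counts are assumed values, and the stitched case is modeled pessimistically as one initialization per stitched frame.

```python
# Rough, illustrative count of decoder initializations needed to display
# `num_frames` composite display frames, each composed of `streams_per_frame`
# independently encoded video streams (numbers are assumptions, not from the patent).

def inits_without_stitching(num_frames: int, streams_per_frame: int) -> int:
    # Conventional approach: the decoder is re-initialized per stream, per frame.
    return num_frames * streams_per_frame

def inits_with_stitching(num_frames: int) -> int:
    # Stitched approach: at most one decode (and initialization) per stitched frame.
    return num_frames

if __name__ == "__main__":
    frames, streams = 60, 4  # e.g., one second at 60 fps with four streams per frame
    print(inits_without_stitching(frames, streams))  # 240
    print(inits_with_stitching(frames))              # 60
```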
- FIG. 1 illustrates an example of a video codec 100 configured to encode and decode video streams to generate frames for display at an electronic device in accordance with some embodiments.
- The video codec 100 can be employed in any of a variety of devices, such as a personal computer, a mobile device such as a smartphone, a video player, a video game console, a casino gaming device, and the like.
- The video streams encoded by the video codec 100 are comprised of a plurality of images or pictures for display at a display device 119. Because the large amount of information stored in each video stream can require considerable computing resources such as processing power and memory, the video codec 100 is employed to encode or compress the information in the video streams without unduly diminishing image quality. Prior to display, the video codec 100 decodes the encoded video streams so that the uncompressed images can be displayed at the display device 119.
- The video codec 100 comprises an encoder 105, a memory 107, an input/output module 108, a stitching module 110, a decoder 115, a destitching module 117, and a display device 119.
- The encoder 105 is configured to receive video streams (VS), including VS1 111, VS2 112, through an Nth video stream VSN 113.
- The encoder 105 is further configured to encode each received video stream to generate a corresponding stream of encoded frames (e.g., stream of encoded frames (EF) 119 corresponding to VS1 111).
- Each of the video streams 111-113 represents a different sequence of video frames, and can therefore represent any of a variety of video content items.
- In some embodiments, each video stream represents an animation of a gaming element of a casino game, such as a video slot machine or a pachinko machine.
- In other embodiments, each video stream represents a different television program, movie, or other video entertainment content.
- The encoder 105 is configured to encode each received video stream 111-113 according to any of a number of compression or encoding formats or standards, such as the Moving Picture Experts Group (MPEG)-2 Part 2, MPEG-4 Part 2, H.264, H.265 (HEVC), Theora, Dirac, RealVideo RV40, VP8, or VP9 encoding formats, to generate a corresponding encoded video stream.
- The encoder 105 outputs the corresponding encoded video frames EF1, EF2, ... EFN to the memory 107.
- These encoded frames are comprised of encoded macroblocks or coding tree units (CTUs).
- The memory 107 is a storage medium generally configured to receive the encoded video frames EF1, EF2, ... EFN from the encoder 105 and store them for retrieval by the stitching module 110.
- The memory 107 may include any storage medium, or combination of storage media, accessible by a computer system.
- Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media.
- The memory 107 may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computing system via a wired or wireless network (e.g., network attached storage (NAS)).
- Input/output module 108 is generally configured to generate electrical signals representing a user's interaction with an input device (not shown), such as a touchscreen, keyboard, a set of buttons or other input, game controller, computer mouse, trackball, pointing device, paddle, knob, eye gaze tracker, digital camera, microphone, joystick and the like.
- The electrical signals indicate a user's selection of video streams for display. The selection may be a direct selection, whereby the user selects particular video streams for display.
- For example, the user may employ a mouse or television remote control to select an arrangement of video clips to be simultaneously displayed.
- Alternatively, the selection can be an indirect selection, such as a random selection of video frames generated in response to a user input.
- For example, the selection can be a random selection of video streams generated in response to a user pressing a "spin" button at a casino gaming machine.
- Based on the user selection, the input/output module 108 generates a stitching sequence instruction 109 that indicates both the individual video streams to be displayed and the arrangement of the video streams as they are to be displayed.
- The input/output module 108 may be programmed to generate, based on received user inputs, stitching sequence instructions 109 that delineate a random or pseudo-random selection of video streams and the arrangement of the video streams as they are to be displayed.
- For example, the input/output module 108 may generate a stitching instruction 109 directing the selection of encoded frames EF2, EF4 (not shown), EF5 (not shown), and EF8 (not shown), and the arrangement of the corresponding video streams in a one-dimensional stack, with the video stream represented by encoded frame EF2 displayed at the top of the stack, the video stream represented by encoded frame EF4 displayed below EF2, the video stream represented by encoded frame EF5 displayed below EF4, and the video stream represented by encoded frame EF8 displayed below EF5, at the bottom of the stack.
- Over time, the input/output module 108 can change the stitching instructions 109 to reflect new user input and new corresponding selections and arrangements of video frames to be displayed. For example, in the case of a casino gaming machine, for each user input representing a spin or other game event, the input/output module 108 can generate new stitching instructions 109, thereby generating new selections and arrangements of the video frames according to the rules of the casino game.
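- The patent does not specify a concrete format for the stitching sequence instruction 109, so the following is a hypothetical sketch of one possible representation; the class name, field name, and `make_spin_instruction` helper are invented for illustration.

```python
# Hypothetical representation of a stitching sequence instruction: the selected
# encoded frames and their top-to-bottom order in a one-dimensional vertical stack.
from dataclasses import dataclass, field
import random

@dataclass
class StitchingSequenceInstruction:
    # Identifiers of the selected encoded frames, listed top-to-bottom in the stack.
    frame_ids: list[str] = field(default_factory=list)

def make_spin_instruction(available: list[str], slots: int, rng: random.Random) -> StitchingSequenceInstruction:
    # Indirect selection: a pseudo-random pick triggered by a "spin" input.
    return StitchingSequenceInstruction(frame_ids=rng.sample(available, slots))

# Direct selection matching the example above: EF2 on top, then EF4, EF5, EF8.
direct = StitchingSequenceInstruction(frame_ids=["EF2", "EF4", "EF5", "EF8"])
spin = make_spin_instruction([f"EF{i}" for i in range(1, 9)], slots=4, rng=random.Random(0))
```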
- The stitching module 110 is configured to receive the stitching sequence instructions 109 and, in accordance with those instructions, select encoded frames stored in the memory 107 and stitch the selected frames to generate a stitched encoded frame 118 for output to the decoder 115.
- The stitching module 110 stitches the selected encoded frames by modifying the pixel block headers (e.g., macroblock or CTU headers) of the selected encoded frames, such as by modifying a sequence number in the pixel block header that indicates the location of the corresponding pixel block in the frame to be displayed.
- The decoder 115 is generally configured to decode the stitched encoded frame 118 to generate a stitched decoded frame 116.
- The decoder 115 decodes the stitched encoded frame 118 according to any of a number of decoding formats or standards corresponding to the format used to encode the video streams.
- The decoder 115 then provides the decoded frame 116 to the destitching module 117. Because the decoded frame 116 is generated based on the stitched encoded frame 118, it corresponds to a frame that would be generated if each of the individual displayed video streams were composited prior to encoding. However, by stitching together the video streams in their encoded form, the video codec 100 supports a wide variety of video stream selection and arrangement combinations while reducing setup overhead.
- The destitching module 117 is generally configured to receive de-stitching instructions (not shown) and, in accordance with the de-stitching instructions, de-stitch the stitched decoded frame 116 to generate decoded (uncompressed) video streams corresponding to VS1 111, VS2 112, ... VSN 113.
- The destitching module 117 outputs the decoded video streams (not shown), which are composed to generate a display frame for display at the display device 119.
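- As a rough illustration of what de-stitching amounts to for a vertically stacked layout, the sketch below slices a decoded stitched frame back into per-stream images. The NumPy array representation, the frame dimensions, and the function name are assumptions for illustration, not the patent's implementation.

```python
# Illustrative de-stitching of a decoded stitched frame laid out as a vertical
# stack of equally sized sub-frames: recover one image per stream by slicing rows.
import numpy as np

def destitch_vertical(stitched: np.ndarray, num_streams: int) -> list[np.ndarray]:
    # stitched has shape (sub_height * num_streams, width, 3); each slice is one stream's frame.
    sub_height = stitched.shape[0] // num_streams
    return [stitched[i * sub_height:(i + 1) * sub_height] for i in range(num_streams)]

decoded = np.zeros((4 * 240, 320, 3), dtype=np.uint8)  # four 240x320 sub-frames stacked vertically
streams = destitch_vertical(decoded, num_streams=4)
assert len(streams) == 4 and streams[0].shape == (240, 320, 3)
```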
- The encoder 105 receives the video streams 111-113, encodes each received stream to generate corresponding encoded video frames, and stores the encoded video frames at the memory 107.
- The encoding of the video streams into the encoded video frames is done prior to general operation of a device that employs the video codec 100.
- The encoded video frames may be generated by the encoder 105 during a manufacturing or provisioning stage of the device employing the video codec 100 so that the encoded video frames are ready during general operation of the device by a user.
- The stitching module 110 selects encoded frames stored in the memory 107 and stitches the encoded frames to generate a stitched encoded frame 118 for output to the decoder 115.
- The decoder 115 decodes the received stitched encoded frame 118 to generate a stitched decoded frame 116 for output to the destitching module 117.
- The destitching module 117 destitches the received stitched decoded frame 116 to generate decoded video streams for output to the display device 119, which displays the video streams to the user.
- FIG. 2 illustrates an example of the video codec 100 generating a stitched encoded frame 212 in accordance with some embodiments.
- Encoded video frames EF1, EF2, EF3, EF4, EF5, EF6, EF7, and EF8 are stored in memory 207 (not shown).
- The stitching module 110 receives a stitching sequence instruction 209.
- The stitching sequence instruction 209 indicates that the encoded video frames EF1, EF2, EF3, and EF5 are to be arranged in a one-dimensional stack, with EF3 at the top of the stack, EF1 below EF3, EF5 below EF1, and EF2 below EF5, at the bottom of the stack.
- The stitching module 110 retrieves encoded video frames EF1, EF2, EF3, and EF5 from the memory 207 and stitches them into a stitched encoded video frame 212 having four vertically stacked encoded video frames, with EF3 at the top of the stack, EF1 below EF3, EF5 below EF1, and EF2 below EF5, at the bottom of the stack.
- The stitching module 110 thus matches the selection and arrangement indicated by the stitching sequence instruction 209.
- The stitching module 110 arranges the selected encoded frames according to the instructed arrangement by modifying one or more pixel block headers of the encoded frames, thereby modifying the location of the corresponding pixel blocks in the frame. An example is described further below with respect to FIG. 5.
- The stitching sequence instruction received by the stitching module 110 can change over time in response to user inputs, so that different selections and arrangements of encoded video frames are stitched into different stitched encoded frames at different times.
- An example is illustrated at FIG. 3 in accordance with some embodiments.
- Encoded video frames EF1, EF2, EF3, EF4, EF5, EF6, EF7, and EF8 are stored in memory 107.
- The stitching module 110 receives a stitching sequence instruction (not illustrated) indicating that the encoded video frames EF1, EF2, EF3, and EF5 are to be arranged in a stack having four frames, with EF3 at the top of the stack, EF1 below EF3, EF5 below EF1, and EF2 below EF5, at the bottom of the stack.
- The stitching module 110 retrieves encoded video frames EF1, EF2, EF3, and EF5 and stitches them into an encoded video frame 312 having four stacked frames, with EF3 at the top of the stack, EF1 below EF3, EF5 below EF1, and EF2 below EF5, at the bottom of the stack.
- The stitching module 110 receives a new stitching sequence instruction (not shown) indicating that the encoded video frames EF4, EF6, EF7, and EF8 are to be arranged in a stack having four frames, with EF6 at the top of the stack, EF4 below EF6, EF7 below EF4, and EF8 below EF7, at the bottom of the stack.
- The stitching module 110 retrieves encoded video frames EF4, EF6, EF7, and EF8 and stitches them into an encoded video frame 313 having four vertically stacked frames, with EF6 at the top of the stack, EF4 below EF6, EF7 below EF4, and EF8 below EF7, at the bottom of the stack.
- The stitching module 110 thus updates the selection and arrangement of encoded video frames in response to changes in the stitching sequence instruction, thereby changing the arrangement of video streams displayed at the display device 119.
- FIG. 4 illustrates an example of the video codec of FIG. 1 selecting and stitching different sets of encoded video frames to generate different stitched encoded frames comprised of overlapping sets of encoded video frames in accordance with some embodiments.
- Encoded video frames EF1, EF2, EF3, EF4, EF5, EF6, EF7, and EF8 are stored in memory 107.
- The stitching module 110 receives a stitching sequence instruction (not shown).
- The stitching module 110 retrieves encoded video frames EF1, EF2, EF3, and EF5 and stitches them into an encoded video frame 412 having four stacked frames, with EF3 at the top of the stack, EF1 below EF3, EF5 below EF1, and EF2 below EF5, at the bottom of the stack.
- The stitching module 110 receives a new stitching sequence instruction (not shown).
- The stitching module 110 retrieves encoded video frames EF2, EF3, EF4, and EF8 and stitches them into an encoded video frame 413 having four stacked frames, with EF3 at the top of the stack, EF4 below EF3, EF8 below EF4, and EF2 below EF8, at the bottom of the stack.
- FIG. 5 illustrates an example of the video codec of FIG. 1 modifying a pixel block identifier (e.g., a macroblock or CTU header) of an encoded video frame to determine the order in which it will be stitched into a stitched encoded frame 559 in accordance with some embodiments.
- The memory 107 stores encoded video frames, such as encoded video frame 551.
- Each encoded video frame is comprised of at least a header and a payload (e.g., header 552 and payload 553 for encoded frame 551).
- The header includes address information for the specified pixel block of the encoded video frame. For example, the address of the first pixel block, located in the upper left corner of the frame, can be designated 0.
- The stitching module 110 changes the positions of the pixel blocks in the stitched frame by changing their addresses. For example, changing the address in the pixel block header from 0 to 2 shifts the pixel block two positions down, assuming a stitched encoded frame made up of four stacked pixel blocks. Changing the address in the pixel block header from 0 to 8 shifts the pixel block eight positions down.
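- A minimal sketch of that address arithmetic follows. The function name, and the assumption that pixel blocks occupy a single address space ordered top to bottom with a fixed number of blocks per sub-frame, are illustrative rather than the patent's implementation.

```python
# Illustrative re-addressing of an encoded sub-frame's pixel blocks so the
# sub-frame lands at a chosen slot in a vertically stacked stitched frame.
# Assumption: block addresses run top to bottom, starting at 0 in each source frame.

def restitched_address(original_address: int, slot_index: int, blocks_per_subframe: int) -> int:
    # Shifting every block address by slot_index * blocks_per_subframe moves the
    # whole sub-frame down by that many block positions in the stitched frame.
    return original_address + slot_index * blocks_per_subframe

# With one pixel block per sub-frame, moving a block from address 0 into the third
# slot gives address 2 (a shift of two positions), matching the "0 to 2" example above.
assert restitched_address(0, slot_index=2, blocks_per_subframe=1) == 2
assert restitched_address(0, slot_index=8, blocks_per_subframe=1) == 8
```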
- The stitching sequence instruction indicates that encoded video frame EF3 is to be stitched into the top of the stitched encoded frame 559, encoded video frame EF1 is to be stitched below encoded video frame EF3, encoded video frame EF5 is to be stitched below encoded video frame EF1, and encoded video frame EF2 is to be stitched below encoded video frame EF5, at the bottom of the stitched encoded frame 559.
- Accordingly, the stitching module 110 modifies the pixel block header address of encoded video frame EF3 from 0 to N1, the pixel block address of encoded video frame EF1 from 0 to N2, the pixel block address of encoded video frame EF5 from 0 to N3, and the address of encoded video frame EF2 from 0 to N4.
- Address N4 is shifted more than N3, which is shifted more than N2, which is shifted more than N1, in order to achieve the arrangement shown in stitched encoded frame 559.
- Persons of ordinary skill in the art will appreciate that other relative address shifts can be used in other implementations to achieve the same ordering.
- The stitching module 110 thus changes the relative position of the pixel blocks of each encoded video frame for the stitched encoded frame 559, thereby logically stitching the encoded frames into the stitched encoded frame 559.
- The stitching module 110 changes the pixel block headers without changing the number of bits that store the address information.
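- The sketch below illustrates rewriting the address in place within a fixed-width field, so the header keeps the same size. The 16-bit big-endian field and its byte offset are invented for illustration; real macroblock and CTU headers use codec-specific syntax.

```python
# Hypothetical fixed-width header layout: rewrite the block address without
# changing the number of bits (here, bytes) used to store it.
ADDRESS_OFFSET = 0   # byte offset of the address field within the header (assumed)
ADDRESS_WIDTH = 2    # address stored in a fixed 16-bit field (assumed)

def rewrite_address(header: bytearray, new_address: int) -> None:
    if new_address >= 1 << (8 * ADDRESS_WIDTH):
        raise ValueError("new address does not fit in the fixed-width field")
    header[ADDRESS_OFFSET:ADDRESS_OFFSET + ADDRESS_WIDTH] = new_address.to_bytes(ADDRESS_WIDTH, "big")

header = bytearray(b"\x00\x00\x5a\x5a")  # address 0 followed by other (dummy) header bytes
rewrite_address(header, 8)
assert header == bytearray(b"\x00\x08\x5a\x5a") and len(header) == 4  # same size, new address
```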
- FIG. 6 illustrates a method 600 of stitching together encoded video frames to generate a stitched encoded frame for decoding in accordance with some embodiments.
- The stitching module 110 receives the stitching sequence instruction 109 from the input/output module.
- The stitching module 110 retrieves selected encoded video frames from memory according to the received stitching instruction 109.
- The stitching module 110 modifies the pixel block header addresses of the selected encoded video frames according to the received stitching instruction 109.
- The stitching module 110 stitches the encoded video frames according to the received stitching instruction 109 to generate a stitched encoded frame.
- The stitching module 110 outputs the stitched encoded frame 118 to the decoder 115.
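- Tying the steps together, a hedged sketch of this flow is shown below; the `EncodedFrame` stand-in, the in-memory frame store, and the `stitch` helper are illustrative assumptions rather than the patent's data structures.

```python
# Illustrative end-to-end sketch of the method: retrieve the selected encoded
# frames, re-address their pixel blocks for their assigned slots, and emit one
# stitched encoded frame for a single decode call.
from dataclasses import dataclass

@dataclass
class EncodedFrame:
    first_block_address: int
    payload: bytes

def stitch(frame_store: dict[str, EncodedFrame], frame_ids: list[str],
           blocks_per_subframe: int) -> list[EncodedFrame]:
    stitched = []
    for slot, frame_id in enumerate(frame_ids):      # instruction order = top-to-bottom stack
        src = frame_store[frame_id]                  # retrieve the selected encoded frame
        shift = slot * blocks_per_subframe           # address shift for this slot
        stitched.append(EncodedFrame(src.first_block_address + shift, src.payload))
    return stitched                                  # handed to the decoder as one stitched frame

store = {f"EF{i}": EncodedFrame(0, bytes([i])) for i in range(1, 9)}
stitched_frame = stitch(store, ["EF3", "EF1", "EF5", "EF2"], blocks_per_subframe=1)
assert [f.first_block_address for f in stitched_frame] == [0, 1, 2, 3]
```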
- Certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software.
- The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium.
- The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above.
- The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like.
- The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780028608.7A CN109565598A (en) | 2016-05-11 | 2017-05-02 | System and method for dynamically splicing video flowing |
KR1020187032651A KR20180137510A (en) | 2016-05-11 | 2017-05-02 | System and method for dynamically stitching video streams |
JP2018558425A JP2019515578A (en) | 2016-05-11 | 2017-05-02 | System and method for dynamically stitching video streams |
EP17796574.6A EP3456048A4 (en) | 2016-05-11 | 2017-05-02 | System and method for dynamically stitching video streams |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201641016496 | 2016-05-11 | ||
IN201641016496 | 2016-05-11 | ||
US15/170,103 | 2016-06-01 | ||
US15/170,103 US20170332096A1 (en) | 2016-05-11 | 2016-06-01 | System and method for dynamically stitching video streams |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017196582A1 true WO2017196582A1 (en) | 2017-11-16 |
Family
ID=60267476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2017/030503 WO2017196582A1 (en) | 2016-05-11 | 2017-05-02 | System and method for dynamically stitching video streams |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017196582A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116954541A (en) * | 2023-09-18 | 2023-10-27 | 广东保伦电子股份有限公司 | Video cutting method and system for spliced screen |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050008240A1 (en) * | 2003-05-02 | 2005-01-13 | Ashish Banerji | Stitching of video for continuous presence multipoint video conferencing |
US20060120464A1 (en) * | 2002-01-23 | 2006-06-08 | Nokia Corporation | Grouping of image frames in video coding |
WO2007019409A2 (en) * | 2005-08-04 | 2007-02-15 | Microsoft Corporation | Video registration and image sequence stitching |
US20080089405A1 (en) * | 2004-10-12 | 2008-04-17 | Suk Hee Cho | Method and Apparatus for Encoding and Decoding Multi-View Video Using Image Stitching |
US20120224641A1 (en) * | 2003-11-18 | 2012-09-06 | Visible World, Inc. | System and Method for Optimized Encoding and Transmission of a Plurality of Substantially Similar Video Fragments |
-
2017
- 2017-05-02 WO PCT/US2017/030503 patent/WO2017196582A1/en unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060120464A1 (en) * | 2002-01-23 | 2006-06-08 | Nokia Corporation | Grouping of image frames in video coding |
US20050008240A1 (en) * | 2003-05-02 | 2005-01-13 | Ashish Banerji | Stitching of video for continuous presence multipoint video conferencing |
US20120224641A1 (en) * | 2003-11-18 | 2012-09-06 | Visible World, Inc. | System and Method for Optimized Encoding and Transmission of a Plurality of Substantially Similar Video Fragments |
US20080089405A1 (en) * | 2004-10-12 | 2008-04-17 | Suk Hee Cho | Method and Apparatus for Encoding and Decoding Multi-View Video Using Image Stitching |
WO2007019409A2 (en) * | 2005-08-04 | 2007-02-15 | Microsoft Corporation | Video registration and image sequence stitching |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116954541A (en) * | 2023-09-18 | 2023-10-27 | 广东保伦电子股份有限公司 | Video cutting method and system for spliced screen |
CN116954541B (en) * | 2023-09-18 | 2024-02-09 | 广东保伦电子股份有限公司 | Video cutting method and system for spliced screen |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10542301B2 (en) | Multimedia redirection method, device, and system | |
US20190306515A1 (en) | Coding apparatus, coding method, decoding apparatus, and decoding method | |
EP3456048A1 (en) | System and method for dynamically stitching video streams | |
US8223849B2 (en) | Picture decoder, reference picture information communication interface, and reference picture control method | |
CN100508585C (en) | Apparatus and method for controlling reverse-play for digital video bit stream | |
CN108984137A (en) | Double-screen display method and its system, computer readable storage medium | |
CN110012290A (en) | The segment of image is ranked up for encoding and being sent to display equipment | |
US20140146869A1 (en) | Sub picture parallel transcoding | |
KR20100071865A (en) | Method for constructing and decoding a video frame in a video signal processing apparatus using multi-core processor and apparatus thereof | |
TW201143443A (en) | Method and system for 3D video decoding using a tier system framework | |
JP6107970B2 (en) | JCTVC-L0227: VPS_EXTENSION with profile-hierarchy-level syntax structure update | |
EP3243329B1 (en) | Image decoding apparatus, image decoding method, and storage medium | |
JP2001177802A (en) | Image sequence coding method, sub-picture unit used for electronic device, and data storage medium | |
KR102490112B1 (en) | Method for Processing Bitstream Generated by Encoding Video Data | |
CN105681893A (en) | Method and device for decoding stream media video data | |
WO2017196582A1 (en) | System and method for dynamically stitching video streams | |
TWI316812B (en) | ||
KR101693416B1 (en) | Method for image encoding and image decoding, and apparatus for image encoding and image decoding | |
US20200137134A1 (en) | Multi-session low latency encoding | |
KR102499900B1 (en) | Image processing device and image playing device for high resolution image streaming and operaing method of thereof | |
JP2014110452A (en) | Image decoding device and image encoding device | |
JP2002171523A (en) | Image decoder, image decoding method, and program storage medium | |
CN111355981B (en) | Video data playing method and device, storage medium and electronic equipment | |
CN114025162B (en) | Entropy decoding method, medium, program product, and electronic device | |
JP2013017057A (en) | Image processing device, image data generation device, image processing method, image data generation method, and data structure of image file |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018558425 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20187032651 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17796574 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017796574 Country of ref document: EP Effective date: 20181211 |