
CN112911346A - Video source synchronization method and device - Google Patents


Info

Publication number
CN112911346A
Authority
CN
China
Prior art keywords
node
video
frame number
code stream
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110114720.0A
Other languages
Chinese (zh)
Inventor
刘尧
张亚南
李健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tricolor Technology Co ltd
Original Assignee
Beijing Tricolor Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tricolor Technology Co ltd filed Critical Beijing Tricolor Technology Co ltd
Priority to CN202110114720.0A priority Critical patent/CN112911346A/en
Publication of CN112911346A publication Critical patent/CN112911346A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/242Synchronization processes, e.g. processing of PCR [Program Clock References]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643Communication protocols
    • H04N21/6437Real-time Transport Protocol [RTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a video source synchronization method and a video source synchronization device, applied to coding nodes and comprising the following steps: acquiring a segmented video; when the coding node is a master node, determining a master node video frame number of the segmented video and sending source synchronization information to the slave nodes, wherein the master node video frame number is used as the synchronization frame number of the master node; or, when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining the synchronization frame number of the slave node based on the master node video frame number in the source synchronization information and the slave node video frame number; and packing the code stream of the encoded segmented video according to the synchronization frame number and sending it to the decoding node corresponding to the coding node, so that the decoding node decodes the code stream and splices the segmented video. This solves the problem of unsynchronized video output that arises when a single existing encoder cannot encode a high-resolution video and multiple encoders must be used.

Description

Video source synchronization method and device
Technical Field
The application relates to the field of display control, in particular to a video source synchronization method and device.
Background
In the field of display control, video signals generally need to be compression-encoded before transmission. As video resolutions continue to increase, encoding a high-resolution video (generally above 2K, such as 4K and 8K video) requires multiple encoders to encode the same video signal source. In the prior art, due to differences in software and hardware processing performance, when multiple encoders perform H.26x encoding on the same video signal source, the encoders easily fall out of step with one another, so that the video output is not synchronized and the user viewing experience is poor.
Disclosure of Invention
In order to solve the above problem, an embodiment of the present application provides a video source synchronization method and apparatus.
In a first aspect, an embodiment of the present application provides a video source synchronization method, applied to an encoding node, including:
acquiring a segmented video;
when the coding node is a master node, determining master node video frame numbers of the segmented videos, and sending source synchronization information to a slave node, wherein the master node video frame numbers are used as synchronization frame numbers of the master node; or when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronous frame number of the slave node based on the master node video frame number and the slave node video frame number in the source synchronization information;
and packing the code stream after the segmented video is coded according to the synchronous frame number and sending the code stream to a decoding node corresponding to the coding node so that the decoding node decodes the code stream and splices the segmented video.
In the implementation process, the source synchronization information can be sent through the master node, the slave node determines the synchronization frame number through the source synchronization information and the slave node video frame number, and the segmented video is encoded, decoded and spliced based on the synchronization frame number, so that different decoding nodes splice different segmented videos in a frame alignment manner, and the problem of asynchronous output of the high-resolution video existing at present can be solved.
Optionally, when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronization frame number based on the master node video frame number and the slave node video frame number in the source synchronization information includes:
when the slave node receives the source synchronization information before acquiring the slave node video frame number, taking the master node video frame number as the slave node synchronization frame number;
when the slave node receives the source synchronization information after acquiring the slave node video frame number, taking the slave node video frame number as the slave node synchronization frame number.
In the implementation process, the master node video frame number can be acquired by the slave node and is compared with the slave node video frame number to obtain the synchronous frame number, so that the master node and the slave node can send the video code stream to the decoding node according to the synchronous frame number, and the frame synchronization among the nodes is realized.
Optionally, when the coding node is a master node, determining a master node video frame number of the segmented video, and sending source synchronization information to a slave node includes:
and after the master node collects the master node video frame number and passes a preset time threshold, sending the source synchronization information to the slave node.
In the implementation process, redundancy can be increased through a preset time threshold, and time deviation is prevented when video frames are collected.
Optionally, the packing the code stream after the split video coding according to the synchronization frame number and sending the code stream to the decoding node corresponding to the coding node includes:
acquiring the code stream by frames according to the synchronous frame number;
packing the code stream based on a real-time transmission protocol to obtain a code stream transmission packet, wherein the code stream transmission packet comprises a real-time transmission protocol header positioned in front of the code stream, and the real-time transmission protocol header comprises a timestamp;
and sending the code stream transmission packet to the decoding node corresponding to the coding node.
In the implementation process, the code stream is packaged through the real-time transmission protocol, and the time stamp is added in the header of the video code stream, so that the receiver can control the display of the video according to the time stamp.
Optionally, when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronization frame number based on the master node video frame number and the slave node video frame number in the source synchronization information includes:
and when the coding node is a slave node and the master node belong to the same group, determining the slave node video frame number of the segmented video, and determining the synchronous frame number based on the master node video frame number and the slave node video frame number in the source synchronization information, wherein each group comprises a master node and at least one slave node.
In the implementation process, the coding nodes are grouped, source synchronization can be carried out on the segmented videos of the video source through the coding nodes in the same group, the coding nodes in the same group synchronize the segmented videos of the same video source, and only the segmented videos of the same video source can be subjected to source synchronization, so that the transmission efficiency and the source synchronization efficiency can be improved.
Optionally, when the resolution of the video is lower than a preset threshold, multiple encoding nodes correspond to one or more decoding nodes, and the packing of the code stream of the encoded segmented video according to the synchronization frame number and sending it to the decoding nodes corresponding to the encoding nodes, so that the decoding nodes decode the code stream, includes:
and the coding node sends the code stream to one or more corresponding decoding nodes so that the decoding nodes perform multi-path decoding on the code stream.
In the implementation process, when the resolution of the transmitted video is lower than a preset threshold, the code stream of the video is sent to one or more decoding nodes through the plurality of coding nodes, the code stream can be subjected to multi-path decoding through the decoding nodes, the resource consumption is reduced, and the decoding efficiency is improved.
In a second aspect, an embodiment of the present application provides another video source synchronization method, applied to a decoding node, including:
receiving a code stream sent from a coding node;
and decoding the code stream and splicing the segmented video.
In the implementation process, the video source synchronization method applied to the decoding node can be matched through the steps, the code stream sent from the coding node is obtained through the decoding node, and the segmented video aligned with the frame is decoded and spliced, so that the problem that the high-resolution video output is not synchronous at present can be solved.
Optionally, the decoding the code stream and splicing the segmented video includes:
extracting a real-time transport protocol header from a code stream transmission packet of the code stream, and determining a timestamp carried by the real-time transport protocol header according to a packaging rule corresponding to the real-time transport protocol;
and when the plurality of segmented videos are determined to be the same frame according to the timestamp, splicing the segmented videos of the same frame according to the timestamp.
In the implementation process, the display control of the divided videos is carried out through the time stamps, and the decoding nodes are matched with the corresponding coding nodes, so that the problem that the output of the high-resolution videos is not synchronous at present can be solved.
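To make the timestamp-based splicing concrete, the following minimal Python sketch groups decoded segment frames by their RTP timestamp and splices only complete groups; all function and variable names (and the assumption of a 2 × 2 split into four segments) are illustrative, not from the patent:

```python
from collections import defaultdict

def splice_same_frame(segments):
    """Group decoded segment frames by their RTP timestamp and splice
    only complete groups. `segments` is a list of (timestamp, region, frame)
    tuples; these names are assumptions for illustration."""
    groups = defaultdict(dict)
    for ts, region, frame in segments:
        groups[ts][region] = frame
    spliced = []
    for ts in sorted(groups):
        regions = groups[ts]
        if len(regions) == 4:  # e.g. a 2x2 split of a high-resolution source
            # concatenate the regions of the same frame in a fixed layout order
            spliced.append((ts, [regions[r] for r in sorted(regions)]))
    return spliced
```

A real implementation would also tolerate missing segments (e.g. via a timeout) rather than waiting indefinitely for a complete group.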
In a third aspect, an embodiment of the present application provides a video source synchronization apparatus, applied to an encoding node, where the apparatus includes:
the acquisition module is used for acquiring a segmented video;
the processing module is used for determining a main node video frame number of the segmented video when the coding node is a main node, and sending source synchronization information to a slave node, wherein the main node video frame number is used as a synchronization frame number of the main node; or when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronous frame number of the slave node based on the master node video frame number and the slave node video frame number in the source synchronization information;
and the sending module is used for packing the code stream after the divided video is coded according to the synchronous frame number and sending the code stream to the decoding node corresponding to the coding node so as to enable the decoding node to decode the code stream and splice the divided video.
In the implementation process, the source synchronization information can be sent through the master node, the slave node determines the synchronization frame number through the source synchronization information and the slave node video frame number, and the segmented video is encoded, decoded and spliced based on the synchronization frame number, so that different decoding nodes splice different segmented videos in a frame alignment manner, and the problem of asynchronous output of the high-resolution video existing at present can be solved.
Optionally, the processing module may be specifically configured to:
and when the slave node receives the source synchronization information before acquiring the slave node video frame number, taking the master node video frame number as the slave node synchronization frame number.
When the slave node receives the source synchronization information after acquiring the slave node video frame number, taking the slave node video frame number as the slave node synchronization frame number.
In the implementation process, the master node video frame number can be acquired by the slave node and is compared with the slave node video frame number to obtain the synchronous frame number, so that the master node and the slave node can send the video code stream to the decoding node according to the synchronous frame number, and the frame synchronization among the nodes is realized.
Optionally, the processing module may be further configured to:
and after the master node collects the master node video frame number and passes a preset time threshold, sending the source synchronization information to the slave node.
In the implementation process, redundancy can be increased through a preset time threshold, and time deviation is prevented when video frames are collected.
Optionally, the sending module may include:
the acquisition submodule is used for acquiring the code stream by frames according to the synchronous frame number;
the packaging submodule is used for packaging the code stream based on a real-time transmission protocol to obtain a code stream transmission packet, the code stream transmission packet comprises a real-time transmission protocol header positioned in front of the code stream, and the real-time transmission protocol header comprises a timestamp;
and the sending submodule is used for sending the code stream transmission packet to the decoding node corresponding to the coding node.
In the implementation process, the code stream is packed via RTP (Real-time Transport Protocol), and a timestamp is added in the header of the video code stream, so that the receiver can perform display control on the video according to the timestamp.
Optionally, the processing module may be further configured to:
and when the coding node is a slave node and the master node belong to the same group, determining the slave node video frame number of the segmented video, and determining the synchronous frame number based on the master node video frame number and the slave node video frame number in the source synchronization information, wherein each group comprises a master node and at least one slave node.
In the implementation process, the processing module can group a plurality of coding nodes, source synchronization can be carried out on a plurality of segmented videos of the video source through the same group of coding nodes, the same group of coding nodes can synchronize the segmented videos of the same video source, and source synchronization can be carried out only between the segmented videos of the same video source, so that the transmission efficiency and the source synchronization efficiency can be improved.
Optionally, the sending module may further be specifically configured to:
and the coding node sends the code stream to one or more corresponding decoding nodes so that the decoding nodes perform multi-path decoding on the code stream.
In the implementation process, when the resolution of the transmitted video is lower than a preset threshold, the sending module sends the code stream of the video to one or more decoding nodes through the plurality of coding nodes, and the code stream is decoded in multiple paths by the decoding nodes, so that resource consumption can be reduced and decoding efficiency improved.
In a fourth aspect, an embodiment of the present application provides another video source synchronization apparatus, which is applied to a decoding node, and the apparatus includes:
the receiving module is used for receiving the code stream sent from the coding node;
and the decoding module is used for decoding the code stream and splicing the segmented video.
In the implementation process, the video source synchronization device applied to the decoding node can be matched with the above device, the code stream sent from the coding node is obtained through the decoding node, and the segmented video aligned with the frame is decoded and spliced, so that the problem that the high-resolution video output is not synchronous at present can be solved.
Optionally, the decoding module may include:
the determining submodule is used for extracting a real-time transmission protocol header from a code stream transmission packet of the code stream and determining a timestamp carried by the real-time transmission protocol header according to a packaging rule corresponding to the real-time transmission protocol;
and the splicing submodule is used for splicing the segmented videos of the same frame according to the time stamp when the plurality of segmented videos are determined to be the same frame according to the time stamp.
In the implementation process, the display control of the divided videos is performed through the timestamps, and the problem that the output of the high-resolution videos is not synchronous at present can be solved through the matching of the decoding nodes and the corresponding coding nodes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic structural diagram of a video source synchronization system according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a video source synchronization method according to an embodiment of the present application;
fig. 3 is a flowchart of a step of determining a sync frame number according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of source synchronization of a master node and a slave node according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a step of performing packet transmission on a video stream according to an embodiment of the present application;
FIG. 6 is a flow chart of another video source synchronization method according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating a process of decoding a bitstream and splicing a segmented video according to an embodiment of the present application;
FIG. 8 is a diagram of a video source synchronization apparatus provided in an embodiment of the present application;
fig. 9 is a schematic diagram of another video source synchronization apparatus provided in the embodiment of the present application.
Reference numerals: 80-video source synchronization apparatus; 81-acquisition module; 82-processing module; 83-sending module; 90-video source synchronization apparatus; 91-receiving module; 92-decoding module.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a video source synchronization system according to an embodiment of the present application, where the video source synchronization system includes: the video coding device comprises a video signal source, a picture divider, a plurality of coding nodes and decoding nodes corresponding to the coding nodes, wherein the coding nodes comprise a main node and a plurality of slave nodes. In one embodiment, as shown in fig. 1, there are a plurality of decoding nodes and a plurality of encoding nodes in the video source synchronization system, wherein the video from the video source can be divided by the picture divider, and each encoding node can correspond to one or more decoding nodes (not shown in the figure).
Referring to fig. 2, fig. 2 is a flowchart of a video source synchronization method provided in an embodiment of the present application, where the method may be applied to a coding node in the video source synchronization system, and the method may include the following steps:
in step S21, a divided video is acquired.
The video source synchronization system comprises a video source synchronization system, a video source synchronization system and a video source segmentation device, wherein a picture splitter in the video source synchronization system can divide a picture of a high-resolution video into a plurality of parts to obtain a plurality of segmented videos, and then the plurality of segmented videos are transmitted to different coding nodes.
In step S22, when the coding node is a master node, determining a master node video frame number of the segmented video, and sending source synchronization information to a slave node, where the master node video frame number is used as a synchronization frame number of the master node; or when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronous frame number of the slave node based on the master node video frame number and the slave node video frame number in the source synchronization information.
Specifically, the master node and the slave nodes continuously acquire source image information from the segmented video, and determine the currently acquired master node video frame number and slave node video frame number according to the source image information. The source image information may include an image of each frame of the segmented video, and the source synchronization information may include the video frame number acquired by the master node and a source synchronization packet, so that the slave node determines a synchronization frame number based on the master node video frame number and the slave node video frame number when receiving the source synchronization information.
In step S23, the code stream after the segmented video coding is packed and sent to a decoding node corresponding to the coding node according to the synchronization frame number, so that the decoding node decodes the code stream and splices the segmented video.
Therefore, according to the embodiment, the source synchronization information can be sent by the main node, the slave node determines the synchronous frame number through the source synchronization information and the slave node video frame number, and the segmented video is encoded based on the synchronous frame number, so that the problem of asynchronous encoding of different encoding nodes can be solved, and different segmented video frames transmitted to each corresponding decoding node by a plurality of encoding nodes are aligned, so that different decoding nodes can splice different segmented videos in a frame alignment manner, and the problem of asynchronous output of the high-resolution video existing at present can be solved.
Wherein the 2K, 4K and 8K resolutions referred to in this application can be understood as screen resolution levels. For example, 4K resolution means that the number of pixels per line in the horizontal direction reaches or approaches 4096. Depending on the usage scenario and regardless of aspect ratio, 4K has various derivative resolutions: for example, Full Aperture 4K at 4096 × 3112, Academy 4K at 3656 × 2664, and the UHDTV standard at 3840 × 2160 all belong to the category of 4K resolution.
It should be noted that the above method can be used for solving the problem of video output asynchronism occurring when a plurality of coding nodes encode a high-resolution video, and can also be used for other resolution videos, such as a low-resolution video with a resolution lower than 2K, and when a plurality of coding nodes are used for encoding the low-resolution video, the above video source synchronization method can also be used for avoiding the problem of video output asynchronism that may occur.
Optionally, for step S22, the embodiment of the present application determines the synchronization frame number by comparing the master node video frame number and the slave node video frame number. Referring to fig. 3, fig. 3 is a flowchart of the step of determining the synchronization frame number provided in the embodiment of the present application, and step S22 may include the following steps:
in step S221, when the slave node receives the source synchronization information before acquiring the slave node video frame number, taking the master node video frame number as the slave node synchronization frame number.
In step S222, when the slave node receives the source synchronization information after acquiring the slave node video frame number, the slave node video frame number is used as the slave node synchronization frame number.
Illustratively, referring to fig. 4, fig. 4 is a schematic diagram of master-slave node source synchronization according to an embodiment of the present disclosure. A pulse signal is generated each time one frame of the segmented video is acquired. In the process of acquiring the segmented video, the master node captures frames to obtain master node video frame numbers F_m1 to F_mn, and the slave node captures frames to obtain slave node video frame numbers F_s1 to F_sn.
Taking one of the pulse signals as an example, T_w is a preset time threshold, after which the master node transmits the source synchronization information to the slave node, and T_d is the time offset between the master node and the slave node when acquiring the same frame.
In the source synchronization process, the master node and the slave node capture the segmented video of the same input source, obtaining the master node video frame number F_m1 and the slave node video frame number F_s1 respectively.
For the master node, the captured master node video frame number F_m1 is itself the synchronization frame number F_sync1. After capturing F_m1, the master node sends the source synchronization information to the slave node. For the slave node, when the source synchronization information is received before the slave node captures its video frame number F_s1, the slave node determines its synchronization frame number F_sync1 as F_m1; when the source synchronization information is received after the slave node captures F_s1, the master node video frame number in the source synchronization information is not updated, and the slave node determines its synchronization frame number F_sync1 as F_s1.
The above steps are repeated to determine each synchronization frame number F_syncn of each node.
When it is determined that the master node video frame number in the source synchronization information has not been updated, a locally maintained frame-number counter can be incremented, and the slave node's synchronization frame number is advanced in step with the slave node video frame number via this counter.
Therefore, the slave node acquires the video frame number of the master node and compares the video frame number with the video frame number of the slave node to obtain the synchronous frame number, so that the master node and the slave node can send the code stream of the video to the decoding node according to the synchronous frame number, and the frame synchronization among the nodes is realized.
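The slave-side decision rule above can be summarized in a few lines of Python; this is an illustrative sketch only, and the function and argument names are assumptions rather than anything from the patent:

```python
def slave_sync_frame_no(master_no, slave_no, sync_info_received_first):
    """Return the slave's synchronization frame number for one capture pulse.

    sync_info_received_first: True if the master's source synchronization
    information arrived before the slave captured its own frame number.
    """
    if sync_info_received_first:
        return master_no   # F_sync = F_m: adopt the master's frame number
    return slave_no        # F_sync = F_s: keep the locally captured number
```

Because both branches refer to the same input-source frame when the preset time threshold exceeds the capture offset, either choice yields frame-aligned code streams across nodes.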
Optionally, for step S22, when the encoding node is a master node, determining a master node video frame number of the split video, and sending source synchronization information to a slave node includes:
The master node sends the source synchronization information to the slave node after a preset time threshold has elapsed since the master node acquired the master node video frame number.
There is a time offset between the moments at which the master node and the slave node acquire the same video frame, and the preset time threshold is set larger than this offset. For example, if the time offset is about 2 ms, the master node may send the source synchronization information to the slave node 5 ms after acquiring the master node video frame number; setting the preset time threshold to 5 ms adds redundancy and guards against the acquisition-time offset.
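The delayed send can be sketched as follows, under the stated assumption that Tw (here 5 ms) exceeds the worst-case acquisition offset Td (about 2 ms); `send_fn` is a hypothetical transport callback:

```python
import threading

ACQUISITION_SKEW_MS = 2   # observed worst-case offset Td between nodes (assumption)
SEND_DELAY_MS = 5         # preset threshold Tw, chosen > Td for margin

def send_source_sync(master_frame_number, send_fn):
    """After capturing its frame number, the master waits Tw before
    broadcasting, so every slave has captured the same frame first."""
    timer = threading.Timer(SEND_DELAY_MS / 1000.0,
                            send_fn, args=(master_frame_number,))
    timer.start()
    return timer
```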
Optionally, as for step S23, in this embodiment of the present application, a video bitstream may be packed by using a real-time transport protocol, please refer to fig. 5, where fig. 5 is a flowchart of a step of packing and sending a video bitstream provided in this embodiment of the present application, and the step may include the following steps:
In step S231, the code stream is obtained frame by frame according to the synchronization frame number.
Illustratively, while acquiring the code stream, the encoding node may encode the split video with an encoding module and push the synchronization frame number onto the tail of a First-In First-Out (FIFO) queue; the master node and the slave nodes each pop synchronization frame numbers from the head of their FIFO queues, thereby fetching the encoded code stream from the encoding module frame by frame according to the synchronization frame number.
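The FIFO hand-off described above can be sketched as follows (a minimal sketch; `fetch_encoded_frame` stands in for the encoding module's accessor, which is an assumption on our part):

```python
from queue import Queue

class FrameStreamer:
    """Encoder side: push each sync frame number onto a FIFO; the sender
    pops from the head and fetches that frame's bitstream from the encoder."""

    def __init__(self, fetch_encoded_frame):
        self.fifo = Queue()
        self.fetch = fetch_encoded_frame  # hypothetical encoder accessor

    def push_sync_frame(self, fsync):
        self.fifo.put(fsync)              # enqueue at the tail

    def next_bitstream(self):
        fsync = self.fifo.get()           # dequeue from the head
        return fsync, self.fetch(fsync)   # frame-by-frame retrieval
```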
In step S232, the code stream is packed based on a real-time transport protocol to obtain a code stream transport packet, where the code stream transport packet includes a real-time transport protocol header located in front of the code stream, and the real-time transport protocol header includes a timestamp.
The code stream is packed based on the Real-time Transport Protocol (RTP): an RTP header is added in front of the encoded code stream. The header contains a 32-bit timestamp reflecting the sampling instant of the first octet of the RTP packet; the decoding node can calculate delay and delay jitter from the timestamp and perform display control.
The calculation formula of the timestamp may be:
timestamp = Fsync × 90000 / frame rate
Wherein 90000 is the clock frequency corresponding to the timestamp as determined by the RFC 3984/RFC 7798 specifications (which define the RTP payload formats for H.264/H.265).
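A minimal sketch of this timestamp computation and of prepending a 12-byte RTP header to the code stream follows; the dynamic payload type 96 and the CSRC-free field layout are assumptions consistent with RFC 3550, not details of the disclosed embodiment:

```python
import struct

RTP_CLOCK_HZ = 90000  # 90 kHz clock mandated by RFC 3984 / RFC 7798

def rtp_timestamp(fsync, frame_rate):
    # timestamp = F_sync * 90000 / frame rate, truncated to 32 bits
    return (fsync * RTP_CLOCK_HZ // frame_rate) & 0xFFFFFFFF

def pack_rtp(payload, fsync, seq, ssrc, frame_rate=30, payload_type=96):
    """Prepend a minimal 12-byte RTP header (V=2, no padding/extension/CSRC)."""
    header = struct.pack(
        "!BBHII",
        0x80,                  # V=2, P=0, X=0, CC=0
        payload_type & 0x7F,   # M=0, PT (96 is a common dynamic type for H.264/H.265)
        seq & 0xFFFF,
        rtp_timestamp(fsync, frame_rate),
        ssrc,
    )
    return header + payload
```

For a 30 fps stream, frame 30 maps to timestamp 90000, i.e. exactly one second on the 90 kHz clock.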
In step S233, the code stream transport packet is sent to the decoding node corresponding to the encoding node.
Therefore, the code stream is packaged through the RTP, the time stamp is added in the header of the video code stream, and the receiver can control the display of the video according to the time stamp.
Optionally, the plurality of encoding nodes in the video source synchronization system of the present application may be divided into a plurality of groups in advance, each group including one master node and at least one slave node. For step S22, when the encoding node is a slave node, determining the slave node video frame number of the split video and determining the synchronization frame number based on the master node video frame number in the source synchronization information and the slave node video frame number may specifically include:
and when the encoding node is a slave node belonging to the same group as the master node, determining the slave node video frame number of the split video, and determining the synchronization frame number based on the master node video frame number in the source synchronization information and the slave node video frame number.
In this way, the encoding nodes can be grouped so that the split videos of one video source are synchronized by the encoding nodes within the same group; since source synchronization is performed only among the split videos of the same video source, transmission efficiency and source synchronization efficiency are improved.
Optionally, for step S23, when the resolution of the video is lower than a preset threshold, a plurality of encoding nodes correspond to one or more decoding nodes; sending the code stream of the video to the decoding node corresponding to the encoding node according to the synchronization frame number, so that the decoding node decodes the code stream, may specifically include:
and the coding node sends the code stream to one or more corresponding decoding nodes so that the decoding nodes perform multi-path decoding on the code stream.
The preset resolution threshold may be set to 2K: when the resolution of the video is lower than 2K, a plurality of encoding nodes may correspond to one decoding node, or to a plurality of decoding nodes, and the decoding nodes perform multi-path decoding on the code streams.
Therefore, when the resolution of the transmitted video is lower than the preset threshold, the code streams of the video are sent by the plurality of encoding nodes to one or more decoding nodes, which perform multi-path decoding on them, reducing resource consumption and improving decoding efficiency.
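The encoder-to-decoder routing described above can be sketched as follows; interpreting the "2K" threshold as a 2048-pixel width is an assumption, and the round-robin assignment is illustrative only:

```python
def route_streams(encoder_ids, decoder_ids, width, threshold_width=2048):
    """Below the resolution threshold, several encoders can share one
    decoder, which then decodes their streams as multiple channels."""
    if width < threshold_width:
        # round-robin many encoders onto the available decoder(s)
        return {enc: decoder_ids[i % len(decoder_ids)]
                for i, enc in enumerate(encoder_ids)}
    # at full resolution, keep a one-to-one mapping
    return dict(zip(encoder_ids, decoder_ids))
```

For a 1920-wide stream, three encoders can all be routed to a single decoder; at 3840 the mapping stays one-to-one.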
Based on the same inventive concept, an embodiment of the present application further provides a video source synchronization method applied to a decoding node, please refer to fig. 6, where fig. 6 is a flowchart of another video source synchronization method provided in the embodiment of the present application, and the method may include the following steps:
In step S61, a code stream sent from an encoding node is received.
When the encoding node corresponding to the decoding node is a master node, the synchronization frame number is the master node video frame number of the split video; when the encoding node is a slave node, the synchronization frame number is determined from the master node video frame number in the source synchronization information sent by the master node and the slave node video frame number. For the specific method of determining the synchronization frame number, refer to step S22 above, which is not repeated here.
In step S62, the code stream is decoded and the split videos are spliced.
Thus, through the above implementation steps, the video source synchronization method applied to the decoding node cooperates with the encoding side: the decoding node obtains the code stream sent from the encoding node, then decodes and splices the frame-aligned split videos, solving the current problem of unsynchronized high-resolution video output.
Optionally, as for step S62, the present application provides a method for decoding a video bitstream by using a real-time transport protocol, please refer to fig. 7, where fig. 7 is a flowchart of a step of decoding the bitstream and splicing a split video according to the present application, and the step S62 may include the following steps:
in step S621, a real-time transport protocol header is extracted from a code stream transport packet of the code stream, and a timestamp carried by the real-time transport protocol header is determined according to a packing rule corresponding to the real-time transport protocol.
In step S622, when it is determined that the plurality of divided videos are the same frame according to the time stamp, the divided videos of the same frame are spliced according to the time stamp.
The decoding node extracts the 12-byte RTP header from the RTP packet and parses the 32-bit timestamp field according to the packing rule, thereby determining the timestamp and controlling the display output of the video accordingly.
Therefore, display control of the split videos is performed via the timestamp, and the decoding node, cooperating with its corresponding encoding node, solves the current problem of unsynchronized high-resolution video output.
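The receiving side's header extraction and same-frame grouping can be sketched as follows (a minimal sketch; error handling beyond basic checks is omitted, and the function names are illustrative):

```python
import struct

def parse_rtp(packet):
    """Split a 12-byte RTP header from the payload and return the timestamp."""
    if len(packet) < 12:
        raise ValueError("packet shorter than an RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    if b0 >> 6 != 2:
        raise ValueError("not RTP version 2")
    return timestamp, packet[12:]

def group_same_frame(packets):
    """Bucket split-video packets by timestamp: each bucket holds the
    tiles of one output frame, ready to be spliced."""
    frames = {}
    for pkt in packets:
        ts, payload = parse_rtp(pkt)
        frames.setdefault(ts, []).append(payload)
    return frames
```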
Referring to fig. 8, based on the same inventive concept, fig. 8 is a schematic diagram of a video source synchronization apparatus provided in an embodiment of the present application, where the video source synchronization apparatus 80 can be applied to an encoding node, and can include:
the acquisition module 81 is used for acquiring a segmented video;
a processing module 82, configured to determine a master node video frame number of the segmented video when the coding node is a master node, and send source synchronization information to a slave node, where the master node video frame number is used as a synchronization frame number of the master node; or when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronous frame number of the slave node based on the master node video frame number and the slave node video frame number in the source synchronization information;
and the sending module 83 is configured to pack the code stream after the coding of the segmented video according to the synchronous frame number and send the packed code stream to the decoding node corresponding to the coding node, so that the decoding node decodes the code stream and splices the segmented video.
Therefore, with this apparatus, the master node sends the source synchronization information, the slave node determines the synchronization frame number from the source synchronization information and the slave node video frame number, and the split videos are encoded, decoded and spliced based on the synchronization frame number, so that different decoding nodes splice the different split videos in frame alignment, solving the current problem of unsynchronized high-resolution video output.
Optionally, the processing module 82 may be specifically configured to:
and when the slave node receives the source synchronization information before acquiring the slave node video frame number, taking the master node video frame number as the slave node synchronization frame number.
When the slave node receives the source synchronization information after acquiring the slave node video frame number, taking the slave node video frame number as the slave node synchronization frame number.
Therefore, the slave node can obtain the master node video frame number and compare it with its own video frame number to obtain the synchronization frame number, realizing frame synchronization among the nodes, so that the master node and the slave node can send the code stream of the video to the decoding nodes according to the synchronization frame number.
Optionally, the processing module 82 may be further configured to:
and after the master node collects the master node video frame number and passes a preset time threshold, sending the source synchronization information to the slave node.
Therefore, redundancy can be increased through the preset time threshold, and time deviation is prevented when the video frame is collected.
Optionally, the sending module 83 may include:
the acquisition submodule is used for acquiring the code stream by frames according to the synchronous frame number;
the packaging submodule is used for packaging the code stream based on a real-time transmission protocol to obtain a code stream transmission packet, the code stream transmission packet comprises a real-time transmission protocol header positioned in front of the code stream, and the real-time transmission protocol header comprises a timestamp;
and the sending submodule is used for sending the code stream transmission packet to the decoding node corresponding to the coding node.
Therefore, the packing submodule packs and encapsulates the code stream through an RTP (real-time transport protocol), and adds a timestamp in the header of the video code stream, so that a receiver can perform display control on the video according to the timestamp.
Optionally, the processing module 82 may be further configured to:
and when the encoding node is a slave node belonging to the same group as the master node, determining the slave node video frame number of the split video, and determining the synchronization frame number based on the master node video frame number in the source synchronization information and the slave node video frame number, wherein each group comprises one master node and at least one slave node.
In this way, the processing module can group the plurality of encoding nodes so that the split videos of one video source are synchronized by the encoding nodes within the same group; since source synchronization is performed only among the split videos of the same video source, transmission efficiency and source synchronization efficiency are improved.
Optionally, the sending module 83 may be further specifically configured to:
and the coding node sends the code stream to one or more corresponding decoding nodes so that the decoding nodes perform multi-path decoding on the code stream.
Therefore, when the resolution of the transmitted video is lower than the preset threshold, the sending module sends the code streams of the video from the plurality of encoding nodes to one or more decoding nodes, and the decoding nodes perform multi-path decoding on them, which can reduce resource consumption and improve decoding efficiency.
Referring to fig. 9, based on the same inventive concept, fig. 9 is a schematic diagram of another video source synchronization apparatus provided in an embodiment of the present application, where the video source synchronization apparatus 90 can be applied to a decoding node, and can include:
a receiving module 91, configured to obtain, from a coding node corresponding to the decoding node, a code stream of a segmented video in frames according to a synchronous frame number, where the synchronous frame number is a main node video frame number of the segmented video when the coding node corresponding to the decoding node is a main node; when the coding node is a slave node, the synchronous frame number is determined according to the master node video frame number and the slave node video frame number in the source synchronous information sent by the master node;
and the decoding module 92 is configured to decode the code stream and splice the segmented videos.
Therefore, this apparatus, applied to a decoding node, cooperates with the encoding side: the decoding node obtains the code stream sent from the encoding node, then decodes and splices the frame-aligned split videos, solving the current problem of unsynchronized high-resolution video output.
Optionally, the decoding module 92 may include:
the determining submodule is used for extracting a real-time transmission protocol header from a code stream transmission packet of the code stream and determining a timestamp carried by the real-time transmission protocol header according to a packaging rule corresponding to the real-time transmission protocol;
and the splicing submodule is used for splicing the segmented videos of the same frame according to the time stamp when the plurality of segmented videos are determined to be the same frame according to the time stamp.
Therefore, the display control of the divided videos is carried out through the time stamps, and the problem that the output of the high-resolution videos is not synchronous at present is solved through the matching of the decoding nodes and the corresponding coding nodes.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A video source synchronization method applied to an encoding node comprises the following steps:
acquiring a segmentation video;
when the coding node is a master node, determining master node video frame numbers of the segmented videos, and sending source synchronization information to a slave node, wherein the master node video frame numbers are used as synchronization frame numbers of the master node; or when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronous frame number of the slave node based on the master node video frame number and the slave node video frame number in the source synchronization information;
and packing the code stream after the segmented video is coded according to the synchronous frame number and sending the code stream to a decoding node corresponding to the coding node so that the decoding node decodes the code stream and splices the segmented video.
2. The method of claim 1, wherein determining a slave node video frame number for the segmented video when the encoding node is a slave node, wherein determining a synchronization frame number based on the master node video frame number and the slave node video frame number in the source synchronization information comprises:
when the slave node receives the source synchronization information before acquiring the slave node video frame number, taking the master node video frame number as the slave node synchronization frame number;
when the slave node receives the source synchronization information after acquiring the slave node video frame number, taking the slave node video frame number as the slave node synchronization frame number.
3. The method of claim 1, wherein determining a master node video frame number of the segmented video when the coding node is a master node and sending source synchronization information to a slave node comprises:
and after the master node collects the master node video frame number and passes a preset time threshold, sending the source synchronization information to the slave node.
4. The method of claim 1, wherein the packing and sending the coded bitstream of the segmented video to the decoding node corresponding to the coding node according to the synchronization frame number comprises:
acquiring the code stream by frames according to the synchronous frame number;
packing the code stream based on a real-time transmission protocol to obtain a code stream transmission packet, wherein the code stream transmission packet comprises a real-time transmission protocol header positioned in front of the code stream, and the real-time transmission protocol header comprises a timestamp;
and sending the code stream transmission packet to the decoding node corresponding to the coding node.
5. The method of claim 1, wherein determining a slave node video frame number for the segmented video when the coding node is a slave node, wherein determining a synchronization frame number based on the master node video frame number and the slave node video frame number in the source synchronization information comprises:
and when the coding node is a slave node and the master node belong to the same group, determining the slave node video frame number of the segmented video, and determining the synchronous frame number based on the master node video frame number and the slave node video frame number in the source synchronization information, wherein each group comprises a master node and at least one slave node.
6. The method according to claim 1, wherein when the resolution of the video is lower than a preset threshold, a plurality of the coding nodes correspond to one or more of the decoding nodes, and the packing and sending the coded code stream of the segmented video to the decoding node corresponding to the coding node according to the synchronization frame number so that the decoding node decodes the code stream comprises:
and the coding node sends the code stream to one or more corresponding decoding nodes so that the decoding nodes perform multi-path decoding on the code stream.
7. A video source synchronization method applied to a decoding node comprises the following steps:
receiving a code stream sent from a coding node;
and decoding the code stream and splicing the split videos.
8. The method of claim 7, wherein the decoding the codestream and splicing the segmented video comprises:
extracting a real-time transport protocol header from a code stream transmission packet of the code stream, and determining a timestamp carried by the real-time transport protocol header according to a packaging rule corresponding to the real-time transport protocol;
and when the plurality of segmented videos are determined to be the same frame according to the timestamp, splicing the segmented videos of the same frame according to the timestamp.
9. A video source synchronization apparatus applied to an encoding node, the apparatus comprising:
the acquisition module is used for acquiring a segmentation video;
the processing module is used for determining a main node video frame number of the segmented video when the coding node is a main node, and sending source synchronization information to a slave node, wherein the main node video frame number is used as a synchronization frame number of the main node; or when the coding node is a slave node, determining a slave node video frame number of the segmented video, and determining a synchronous frame number of the slave node based on the master node video frame number and the slave node video frame number in the source synchronization information;
and the sending module is used for packing the code stream after the divided video is coded according to the synchronous frame number and sending the code stream to a decoding node corresponding to the coding node so as to enable the decoding node to decode the code stream and splice the divided video.
10. A video source synchronization apparatus applied to a decoding node, the apparatus comprising:
the receiving module is used for receiving the code stream sent from the coding node;
and the decoding module is used for decoding the code stream and splicing the split videos.
CN202110114720.0A 2021-01-27 2021-01-27 Video source synchronization method and device Pending CN112911346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110114720.0A CN112911346A (en) 2021-01-27 2021-01-27 Video source synchronization method and device


Publications (1)

Publication Number Publication Date
CN112911346A true CN112911346A (en) 2021-06-04

Family

ID=76119249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110114720.0A Pending CN112911346A (en) 2021-01-27 2021-01-27 Video source synchronization method and device

Country Status (1)

Country Link
CN (1) CN112911346A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010148089A (en) * 2008-12-19 2010-07-01 Korea Electronics Telecommun Image splitting base ultrahigh resolution video encoding and decoding apparatus and method of controlling the same
CN103763556A (en) * 2014-01-29 2014-04-30 广东威创视讯科技股份有限公司 Video image encoding and decoding device and method and transmission system and method
US20140119456A1 (en) * 2012-11-01 2014-05-01 Microsoft Corporation Encoding video into lower resolution streams
CN106060582A (en) * 2016-05-24 2016-10-26 广州华多网络科技有限公司 Video transmission system, video transmission method and video transmission apparatus
CN107690074A (en) * 2016-08-03 2018-02-13 中国电信股份有限公司 Video coding and restoring method, audio/video player system and relevant device
CN108881955A (en) * 2017-05-08 2018-11-23 Tcl新技术(惠州)有限公司 A kind of method and system for realizing the output of distributed node equipment audio video synchronization
JP2018201159A (en) * 2017-05-29 2018-12-20 日本電信電話株式会社 Video processing method, video processing system, and video transmitting apparatus
CN110650345A (en) * 2019-09-25 2020-01-03 杭州当虹科技股份有限公司 Master-slave multi-node coding method for 8K ultra-high definition
CN110933457A (en) * 2019-12-02 2020-03-27 杭州当虹科技股份有限公司 Multi-node low-delay parallel coding method for 8K ultra-high definition
CN111050025A (en) * 2019-12-04 2020-04-21 深圳市创凯智能股份有限公司 Audio and video display control method, device and system and computer readable storage medium
CN111417005A (en) * 2020-04-27 2020-07-14 北京淳中科技股份有限公司 Video signal synchronous encoding method, device, system and encoding end
CN111556321A (en) * 2020-04-24 2020-08-18 西安万像电子科技有限公司 Video decoding method, device and system
CN111935542A (en) * 2020-08-21 2020-11-13 广州酷狗计算机科技有限公司 Video processing method, video playing method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN106162235B (en) For the method and apparatus of Switch Video stream
RU2117411C1 (en) Device for video signal compression and synchronization device
JP6868838B2 (en) Transmitter, receiver, transmitter and receiver
CA2775318C (en) Method and system for wireless communication of audio in wireless networks
KR20140008237A (en) Packet transmission and reception apparatus and method in mmt hybrid transmissing service
US8547995B2 (en) High definition video/audio data over IP networks
CN103986960A (en) Method for single-video picture division route teletransmission precise synchronization tiled display
EP2658267A1 (en) Transmission device, transmission method, reception device, and reception method
KR101093350B1 (en) Method and apparatus for performing synchronised audio and video presentation
JP2023165959A (en) Transmission method, reception method, transmission device, and reception device
JP2022118086A (en) Reception method, transmission method, and receiver and transmitter
CN111669605A (en) Method and device for synchronizing multimedia data and associated interactive data thereof
JP2023164690A (en) Transmission device, reception device, transmission method and reception method
EP2553936B1 (en) A device for receiving of high-definition video signal with low-latency transmission over an asynchronous packet network
CN112911346A (en) Video source synchronization method and device
Seo et al. A new timing model design for MPEG Media Transport (MMT)
CN111417005B (en) Video signal synchronous encoding method, device, system and encoding end
AU2017392150B2 (en) Method for encoding and processing raw UHD video via an existing HD video architecture
CN112995714A (en) Method and device for converting private video stream into RTMP standard stream
CN117278778A (en) Image processing method, device, splicing controller and image processing system
US7233366B2 (en) Method and apparatus for sending and receiving and for encoding and decoding a telop image
US11758108B2 (en) Image transmission method, image display device, image processing device, image transmission system, and image transmission system with high-transmission efficiency
JP6792038B2 (en) Receiving method and receiving device
CN115174884A (en) Multi-camera synchronization information transmission and storage method based on SEI
CN117981328A (en) Multi-channel synchronous playing method and device for audio and video, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210604)