CN107864393A - Method and device for synchronized display of video and subtitles - Google Patents

Method and device for synchronized display of video and subtitles

Info

Publication number
CN107864393A
CN107864393A (application number CN201711145357.9A)
Authority
CN
China
Prior art keywords
video
caption
data
caption data
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711145357.9A
Other languages
Chinese (zh)
Inventor
魏勇邦
王云刚
沙建鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Electronics Co Ltd
Original Assignee
Qingdao Hisense Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Electronics Co Ltd filed Critical Qingdao Hisense Electronics Co Ltd
Priority to CN201711145357.9A priority Critical patent/CN107864393A/en
Publication of CN107864393A publication Critical patent/CN107864393A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434: Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4345: Extraction or processing of SI, e.g. extracting service information from an MPEG stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307: Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/488: Data services, e.g. news ticker
    • H04N21/4884: Data services, e.g. news ticker for displaying subtitles
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85: Assembly of content; Generation of multimedia applications
    • H04N21/854: Content authoring
    • H04N21/8547: Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a method and device for synchronized display of video and subtitles. The method includes: obtaining a video presentation timestamp indicating the display time of video data to be played, and obtaining a subtitle presentation timestamp range indicating the display time of subtitle data to be played; if the video presentation timestamp falls within the subtitle presentation timestamp range, establishing an association between the video data and the subtitle data; and synchronously outputting the associated video data and subtitle data according to the association. With this method, the efficiency of synchronized display of video and subtitles can be improved, the synchronization process can be prevented from affecting the smoothness of video playback, and the user experience can be improved.

Description

Method and device for synchronized display of video and subtitles
Technical field
This application relates to the field of multimedia technology, and in particular to a method and device for synchronized display of video and subtitles.
Background technology
At present, subtitles are usually displayed in synchronization with video playback, so that users can better understand the video content, improving the user experience. According to how the subtitle data is stored, subtitles can be divided into three kinds: embedded (hard) subtitles, internal subtitles, and external subtitles. Embedded subtitles are subtitle images burned into the video images, i.e. the subtitles and the video image form a single whole, hence the name. Internal subtitles are subtitle data and video data encapsulated independently in the same file, i.e. the subtitles and the video images are independent of each other but stored in the same file, hence the name. External subtitles are subtitle data and video data enclosed in separate files, i.e. the subtitle data is stored on its own, hence the name.
At present, for internal and external subtitles, the approach is: while playing the video images, find the corresponding subtitle data based on the current playback position, convert that subtitle data into a subtitle image, and finally composite the video image with the subtitle image, to achieve synchronized display of video and subtitles.
However, because the corresponding subtitle data is looked up in the subtitle file only when the video image is being played, and the lookup process takes a certain amount of time, synchronized display of video and subtitles is inefficient and affects the smoothness of video playback; for high-resolution, high-frame-rate video sources the impact on playback smoothness is even more apparent. Moreover, the lookup time varies with the subtitle file, so the smoothness users perceive when watching different videos also differs.
Summary of the invention
In view of this, the application provides a method and device for synchronized display of video and subtitles, to improve the efficiency of synchronized display, prevent the synchronization process from affecting playback smoothness, and improve the user experience.
Specifically, the application is achieved through the following technical solutions:
According to a first aspect of the embodiments of the present application, there is provided a method for synchronized display of video and subtitles, the method including:
obtaining a video presentation timestamp indicating the display time of video data to be played, and obtaining a subtitle presentation timestamp range indicating the display time of subtitle data to be played;
if the video presentation timestamp falls within the subtitle presentation timestamp range, establishing an association between the video data and the subtitle data;
synchronously outputting the associated video data and subtitle data according to the association.
According to a second aspect of the embodiments of the present application, there is provided a device for synchronized display of video and subtitles, the device including:
an acquisition module, for obtaining a video presentation timestamp indicating the display time of video data to be played, and obtaining a subtitle presentation timestamp range indicating the display time of subtitle data to be played;
an establishing module, for establishing an association between the video data and the subtitle data if the video presentation timestamp falls within the subtitle presentation timestamp range;
an output module, for synchronously outputting the associated video data and subtitle data according to the association.
As can be seen from the above embodiments, by establishing the association between video data and subtitle data before the video data is played, according to the video presentation timestamp of the video data and the subtitle presentation timestamp range of the subtitle data, the subtitle data associated with the video data can be found through that association during playback. This improves the efficiency of synchronized display of video and subtitles, prevents the synchronization process from affecting playback smoothness, and improves the user experience.
Brief description of the drawings
Fig. 1 is an example of an audio/video playback pipeline created with the GStreamer framework in the prior art;
Fig. 2 is a flowchart of one embodiment of the method for synchronized display of video and subtitles according to an embodiment of the present application;
Fig. 3 is an example of an audio/video playback pipeline created by the present application based on the GStreamer framework;
Fig. 4 is a flowchart of another embodiment of the method for synchronized display of video and subtitles according to an embodiment of the present application;
Fig. 5 is a flowchart of a further embodiment of the method for synchronized display of video and subtitles according to an embodiment of the present application;
Fig. 6 is a hardware structure diagram of a multimedia playback device hosting the device for synchronized display of video and subtitles according to the present application;
Fig. 7 is a block diagram of one embodiment of the device for synchronized display of video and subtitles according to an embodiment of the present application.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "said", and "the" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The GStreamer framework is a multimedia framework for building streaming media applications. Based on the GStreamer framework, developers can easily create various multimedia components, such as media players and video editors. The GStreamer framework uses a plugin- and pipeline-based architecture; that is, the functionality of such multimedia components is realized through pipelines.
Fig. 1 shows an example of an audio/video playback pipeline created with the GStreamer framework in the prior art. The pipeline 100 of Fig. 1 includes: a file reading plugin 110, a demultiplexing plugin 120, an audio decoding plugin 130, a video decoding plugin 140, a subtitle decoding plugin 150, a video-subtitle compositing plugin 160, an audio-video synchronization plugin 170, an audio output plugin 180, and a video-subtitle output plugin 190. The file reading plugin 110 reads the audio/video file; the demultiplexing plugin 120 separates the audio data, video data, and subtitle data; the audio decoding plugin 130 decodes audio data; the video decoding plugin 140 decodes video data; the subtitle decoding plugin 150 decodes subtitle data; the video-subtitle compositing plugin 160 finds, for each video frame, its corresponding subtitle data, converts the subtitle data into a subtitle image, and composites the video image with the subtitle image; the audio-video synchronization plugin 170 synchronizes the audio data with the video data into which subtitle data has been composited; the audio output plugin 180 outputs the audio data; and the video-subtitle output plugin 190 outputs the video images with subtitle images composited in. Through the pipeline of Fig. 1, subtitle images are composited with video images, achieving synchronized display of video and subtitles.
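For orientation, the following is a minimal sketch of how a Fig. 1-style pipeline could be assembled with GStreamer's Python bindings. It is an illustration under assumptions, not the patent's implementation: the container, the concrete element names (matroskademux, avdec_h264, avdec_aac), and the file name are hypothetical, and the stock subtitleoverlay element stands in for the video-subtitle compositing plugin 160.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Rough Fig. 1 topology: file reading -> demultiplexing -> separate audio,
# video and subtitle branches. subtitleoverlay converts subtitle data into
# images and composites them onto the video frames, the costly step this
# application seeks to eliminate.
pipeline = Gst.parse_launch(
    "filesrc location=movie.mkv ! matroskademux name=demux "
    "subtitleoverlay name=overlay ! videoconvert ! autovideosink "
    "demux. ! queue ! avdec_h264 ! videoconvert ! overlay.video_sink "
    "demux. ! queue ! overlay.subtitle_sink "
    "demux. ! queue ! avdec_aac ! audioconvert ! autoaudiosink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until playback ends or an error occurs, then tear down.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```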
However, in the prior art, the corresponding subtitle data is looked up only when the video image is being played, the lookup process takes a certain amount of time, and the time taken varies with the subtitle file. As a result, synchronized display of video and subtitles is currently inefficient and affects the smoothness of video playback, leading to a poor user experience.
On this basis, the application provides a method for synchronized display of video and subtitles, to improve the efficiency of synchronized display, prevent the synchronization process from affecting playback smoothness, and improve the user experience.
To enable those skilled in the art to clearly understand the method for synchronized display of video and subtitles provided by this application, the following embodiments illustrate the method.
Embodiment one:
Referring to Fig. 2, a flowchart of one embodiment of the method for synchronized display of video and subtitles according to an embodiment of the present application, the method may include the following steps:
Step 201: obtain a video presentation timestamp indicating the display time of video data to be played, and obtain a subtitle presentation timestamp range indicating the display time of subtitle data to be played.
Step 202: if the video presentation timestamp falls within the subtitle presentation timestamp range, establish an association between the video data and the subtitle data.
Step 203: synchronously output the associated video data and subtitle data according to the association.
Steps 201 to 203 are explained as follows:
In the embodiments of the present application, the method provided is described by taking as an example its application to a scenario in which synchronized display of video and subtitles is performed based on the GStreamer framework:
Specifically, unlike the prior-art audio/video playback pipeline, e.g. the pipeline of Fig. 1, in the embodiments of the present application, when it is determined that an audio/video file contains subtitle data, a video-subtitle synchronization plugin is added to the playback pipeline created for that file. The synchronization plugin is connected to the video decoding plugin, the subtitle decoding plugin, and the video-subtitle output plugin, respectively, as shown in Fig. 3, an example of an audio/video playback pipeline created by the application based on the GStreamer framework.
Those skilled in the art will understand that, besides the file reading plugin 310, the demultiplexing plugin 320, the video decoding plugin 330, the subtitle decoding plugin 340, the video-subtitle synchronization plugin 350, and the video-subtitle output plugin 360, the pipeline of Fig. 3 may also include other plugins, such as an audio decoding plugin and an audio-video synchronization plugin. Since the embodiments of this application focus on the process of synchronized video and subtitle display, for convenience the plugins used for audio playback are not shown in the pipeline of Fig. 3.
The method for synchronized display of video and subtitles provided by the embodiments of the present application is described below based on the pipeline of Fig. 3:
In the embodiments of the present application, as shown in Fig. 3, the video-subtitle synchronization plugin 350 is connected to both the video decoding plugin 330 and the subtitle decoding plugin 340. After decoding video data, the video decoding plugin 330 transmits it to the synchronization plugin 350; correspondingly, after decoding subtitle data, the subtitle decoding plugin 340 also transmits it to the synchronization plugin 350. The synchronization plugin 350 thus receives video data to be played and subtitle data to be played.
In the embodiments of the present application, the synchronization plugin 350 can obtain the video presentation timestamp of the video data to be played, for example from the PTS (Presentation Time Stamp) field of the video data; this timestamp indicates the display time of the video data. Correspondingly, the synchronization plugin 350 can obtain the subtitle presentation timestamp range of the subtitle data to be played, which indicates the display time of the subtitle data. Specifically, the synchronization plugin 350 can obtain, from the PTS fields of the subtitle data, a first timestamp indicating when display of the subtitle data starts and a second timestamp indicating when display of the subtitle data ends; the first timestamp and the second timestamp then determine the subtitle presentation timestamp range.
The synchronization plugin 350 further judges whether the video presentation timestamp falls within the subtitle presentation timestamp range. Specifically, the synchronization plugin 350 can first judge whether the video presentation timestamp is no greater than the second timestamp; if so, it continues to judge whether the video presentation timestamp is no less than the first timestamp; if so, it can determine that the video presentation timestamp falls within the subtitle presentation timestamp range. That is to say, the display time of the video data falls within the display time range of the subtitle data, and the video data and the subtitle data need to be output synchronously.
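As a concrete illustration, the range check described above can be sketched as follows. This is a sketch under assumptions, not the patent's code: in particular, deriving the second timestamp as PTS plus duration of the subtitle buffer is an assumption about how a GStreamer subtitle buffer's end time might be computed; the description only states that both timestamps come from the subtitle data's PTS fields.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def subtitle_window(sub_buf: Gst.Buffer):
    """Return (first timestamp, second timestamp) of a subtitle buffer.
    Assumption: display start is the buffer's PTS and display end is
    PTS + duration, all in nanoseconds on the pipeline clock."""
    return sub_buf.pts, sub_buf.pts + sub_buf.duration


def pts_in_window(video_pts: int, window: tuple) -> bool:
    """Judgment of plugin 350, in the order given in the description:
    first check the frame does not come after the subtitle ends, then
    check it does not come before the subtitle starts."""
    start, end = window
    return video_pts <= end and video_pts >= start
```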
Subsequently, in order to output the video data and the subtitle data synchronously, the synchronization plugin 350 may establish an association between the video data and the subtitle data.
In an optional implementation, the synchronization plugin 350 may establish the association between the video data and the subtitle data in the form of a table. For example, Table 1 shows an example of the association between video data and subtitle data:
Table 1
In another optional implementation, the synchronization plugin 350 may establish the association between the video data and the subtitle data in the form of a pointer or an index. For example, after the association between video data and subtitle data is determined, a pointer may be set for the video data that points to the storage address of the subtitle data associated with it.
Those skilled in the art will understand that the two implementations described above are merely examples; in practice there may be other ways of establishing the association between video data and subtitle data, and the application is not limited in this respect.
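To make the two optional implementations concrete, the following sketch renders both in plain Python; every name in it (VideoFrame, associate, the nanosecond values) is hypothetical.

```python
class VideoFrame:
    """Stand-in for a decoded video frame handed to plugin 350."""

    def __init__(self, pts: int, data: bytes):
        self.pts = pts
        self.data = data
        self.subtitle = None  # pointer/index-style association (2nd form)


# First form: a lookup table (cf. Table 1) keyed by the frame's PTS.
association_table: dict = {}


def associate(frame: VideoFrame, subtitle: dict) -> None:
    association_table[frame.pts] = subtitle  # table form
    frame.subtitle = subtitle                # pointer form


# Example: a subtitle shown from 0.9 s to 3.0 s matched to a 1.0 s frame.
sub = {"start": 900_000_000, "end": 3_000_000_000, "text": "hello"}
frame = VideoFrame(pts=1_000_000_000, data=b"<frame bytes>")
associate(frame, sub)
```

Either form gives the output plugin a constant-time way to go from a video frame to its subtitle, which is the point of building the association before playback instead of searching the subtitle file during it.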
After the synchronization plugin 350 establishes the association between the video data and the subtitle data, it may transmit the video data and the subtitle data to the next plugin, i.e. the video-subtitle output plugin 360, so that the output plugin 360 synchronously outputs the video data and the subtitle data.
In addition, as described above, in the existing approach the video-subtitle compositing plugin 160 converts the subtitle data into a subtitle image and then composites the subtitle image with the video image. In this process, converting subtitle data into a subtitle image involves multiple third-party libraries and relatively complex processing logic, and the processing of compositing the video image with the subtitle image also consumes considerable system resources.
On this basis, the embodiments of the application propose providing the video-subtitle output plugin 360 with two display layers (not shown in Fig. 3), which are used to display the video data and the subtitle data respectively. For convenience, the two display layers are referred to in the embodiments of the application as the first display layer and the second display layer, where the first display layer is used to display video data and the second display layer is used to display subtitle data; those skilled in the art will understand that the second display layer may be located above the first display layer. Specifically, after receiving the video data transmitted by the synchronization plugin 350, the output plugin 360 can obtain the subtitle data associated with that video data according to the above association, output the video data to the first display layer, and output the associated subtitle data to the second display layer, thereby achieving synchronized output of the subtitle data and the video data.
Those skilled in the art will understand that with the two display layers, the subtitle data and the video data are displayed separately, which avoids converting subtitle data into a subtitle image and avoids the processing of compositing the video image with the subtitle image, improving the efficiency of synchronized display of video and subtitles and reducing the consumption of system resources.
In addition, in the above process, if the synchronization plugin 350 judges that the video presentation timestamp is greater than the second timestamp, the display time of the video data currently received lies after the display time of the subtitle data currently received; in other words, the synchronization plugin 350 has not yet received the subtitle data that needs to be displayed in synchronization with the video data currently received. In that case, the synchronization plugin 350 can continue to receive the next subtitle data from the subtitle decoding plugin 340.
Also, in the above process, if the synchronization plugin 350 judges that the video presentation timestamp is less than the first timestamp, the display time of the video data currently received lies before the display time of the subtitle data currently received; in other words, the video data currently received has no subtitle data requiring synchronized display. In that case, the synchronization plugin 350 can transmit the video data currently received directly to the next plugin, e.g. the video-subtitle output plugin 360. Correspondingly, after the output plugin 360 receives the video data, if no subtitle data associated with that video data can be obtained, the subtitle data associated with the video data is set to a null value.
As can be seen from the above embodiment, by establishing the association between video data and subtitle data before the video data is played, according to the video presentation timestamp of the video data and the subtitle presentation timestamp range of the subtitle data, the subtitle data associated with the video data can be found through that association during playback. This improves the efficiency of synchronized display of video and subtitles, prevents the synchronization process from affecting playback smoothness, and improves the user experience.
This concludes the description of embodiment one.
Embodiment two:
Referring to Fig. 4, a flowchart of another embodiment of the method for synchronized display of video and subtitles according to an embodiment of the present application. On the basis of the method shown in Fig. 2, this method focuses on how the video-subtitle output plugin 360 controls the second display layer to display subtitle content, and may include the following steps:
Step 401: judge whether the associated subtitle data is identical to the subtitle data last output to the second display layer; if identical, end the flow; if not, perform step 402.
Step 402: judge whether the associated subtitle data is a null value; if it is a null value, perform step 404; if it is not a null value, perform step 403.
Step 403: output the associated subtitle data to the second display layer, and end the flow.
Step 404: clear the subtitle content currently shown on the second display layer.
Steps 401 to 404 are described in detail as follows:
After the video-subtitle output plugin 360 obtains the subtitle data associated with the video data currently received, during display it can first judge whether that associated subtitle data is identical to the subtitle data last output to the second display layer. If identical, the video data has the same subtitle data as the previous video frame, so the output plugin 360 need not do anything, and the subtitle content shown on the second display layer does not change. Because the subtitle data corresponding to different video frames may be identical, this processing avoids switching the subtitle content shown in the second layer when such subtitle data is played, improving the user's viewing experience.
Those skilled in the art will understand that each time the output plugin 360 outputs subtitle data, it can store that subtitle data, so that it can later retrieve the subtitle data last output to the second display layer.
If it is judged that the associated subtitle data differs from the subtitle data last output to the second display layer, step 402 can be executed.
In step 402, it is judged whether the associated subtitle data is a null value. If it is a null value, step 404 is performed, i.e. the subtitle content currently shown on the second display layer is cleared; if it is not a null value, step 403 can be performed, i.e. the associated subtitle data is output to the second display layer, so that the second display layer displays the subtitle content.
In addition, those skilled in the art will understand that steps 401 to 404 of the embodiment shown in Fig. 4 are merely examples; in practice, their execution is not specifically limited. For example, the output plugin 360 may perform only steps 402 to 404, or perform only step 401 and, upon judging in step 401 that the associated subtitle data differs from the subtitle data last output to the second display layer, directly perform step 403.
As can be seen from the above embodiment, before the video-subtitle output plugin outputs subtitle data, it first judges whether the subtitle data to be output is identical to the subtitle data last output; if identical, it is not output again. If they differ, it further judges whether the subtitle data to be output is a null value; if so, the subtitle content currently shown on the second display layer (used for displaying subtitles) is cleared directly; if not, the subtitle data to be output is displayed on the second display layer. This improves the efficiency of subtitle display while improving the user's viewing experience.
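Putting embodiment two together, the output side could look like the sketch below. The Layer objects and their show()/clear() methods are hypothetical stand-ins for the two display layers (e.g. hardware planes or OSD layers); the decision flow mirrors steps 401 to 404, and frames are assumed to carry the subtitle reference established earlier.

```python
class Layer:
    """Hypothetical display layer with the minimal operations needed here."""

    def __init__(self, name: str):
        self.name = name

    def show(self, content) -> None:
        print(f"[{self.name}] show: {content!r}")

    def clear(self) -> None:
        print(f"[{self.name}] cleared")


class VideoSubtitleSink:
    """Sketch of output plugin 360: one layer per stream, no compositing."""

    _NOTHING = object()  # sentinel: no subtitle has been output yet

    def __init__(self):
        self.video_layer = Layer("layer-1 video")
        self.subtitle_layer = Layer("layer-2 subtitles")  # stacked on top
        self.last_subtitle = self._NOTHING

    def render(self, frame) -> None:
        self.video_layer.show(frame.data)       # video always goes to layer 1
        subtitle = frame.subtitle               # None when nothing to show
        if subtitle == self.last_subtitle:      # step 401: unchanged subtitle,
            return                              # leave layer 2 untouched
        if subtitle is None:                    # step 402: null value?
            self.subtitle_layer.clear()         # step 404: clear layer 2
        else:
            self.subtitle_layer.show(subtitle)  # step 403: display new text
        self.last_subtitle = subtitle           # remember the last output
```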
This concludes the description of embodiment two.
To enable those skilled in the art to understand more clearly the method for synchronized display of video and subtitles provided by this application, the following embodiment three, which combines embodiments one and two, describes the method in detail.
Embodiment three:
Referring to Fig. 5, a flowchart of a further embodiment of the method for synchronized display of video and subtitles according to an embodiment of the present application, the method may include the following steps:
Step 501: the video-subtitle synchronization plugin receives a piece of video data and obtains the video presentation timestamp indicating the display time of that video data.
Step 502: the synchronization plugin receives a piece of subtitle data and obtains the first timestamp indicating when display of the subtitle data starts and the second timestamp indicating when display of the subtitle data ends.
Step 503: the synchronization plugin judges whether the video presentation timestamp is no greater than the second timestamp; if so, perform step 504; if the video presentation timestamp is greater than the second timestamp, return to step 502.
Step 504: the synchronization plugin judges whether the video presentation timestamp is no less than the first timestamp; if so, perform step 505; if the video presentation timestamp is less than the first timestamp, perform step 506.
Step 505: establish the association between the video data and the subtitle data.
Step 506: transmit the video data to the video-subtitle output plugin.
Step 507: the video-subtitle output plugin receives the video data and obtains the subtitle data associated with that video data according to the association.
Step 508: the output plugin judges whether the associated subtitle data is identical to the subtitle data last output to the second display layer; if identical, end the flow; if not, perform step 509.
Step 509: the output plugin judges whether the associated subtitle data is a null value; if it is a null value, perform step 511; if it is not a null value, perform step 510.
Step 510: the output plugin outputs the associated subtitle data to the second display layer, and ends the flow.
Step 511: the output plugin clears the subtitle content currently shown on the second display layer.
For the detailed description of steps 501 to 511, refer to the relevant descriptions in embodiments one and two above; they are not repeated here.
In addition, those skilled in the art will understand that the execution order of steps 501 to 511 above is merely an example; in practice, other logical execution orders are possible; for example, steps 501 and 502 may be performed simultaneously.
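For concreteness, the synchronization side of Fig. 5 (steps 501 to 506) can be sketched as a single handler. All names are hypothetical: pending stands for the queue of decoded subtitles arriving from the subtitle decoding plugin 340, downstream for the hand-off to output plugin 360, and the frame and subtitle objects are those of the earlier sketches.

```python
from collections import deque


class SyncElement:
    """Minimal stand-in for plugin 350 (video-subtitle synchronization)."""

    def __init__(self, subtitles):
        self.pending = deque(subtitles)  # decoded subtitles in PTS order
        self.current_subtitle = None
        self.downstream = []             # frames forwarded to plugin 360

    def handle_video(self, frame) -> None:
        """Steps 501 to 506: match one incoming video frame to a subtitle."""
        while True:
            sub = self.current_subtitle or self._pull_subtitle()  # step 502
            if sub is None:
                break                            # subtitle stream exhausted
            self.current_subtitle = sub
            if frame.pts > sub["end"]:           # step 503 fails: subtitle
                self.current_subtitle = None     # already over; fetch next
                continue
            if frame.pts >= sub["start"]:        # step 504: inside window
                frame.subtitle = sub             # step 505: associate
            # else the frame precedes the subtitle: no association
            break
        self.downstream.append(frame)            # step 506: forward frame

    def _pull_subtitle(self):
        return self.pending.popleft() if self.pending else None
```

A usage note: because frames and subtitles both arrive in presentation order, each subtitle is examined at most a handful of times, so the association work stays roughly constant per frame instead of a per-frame search through the subtitle file.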
As can be seen from the above embodiment, by establishing the association between video data and subtitle data before the video data is played, according to the video presentation timestamp of the video data and the subtitle presentation timestamp range of the subtitle data, the subtitle data associated with the video data can be found through that association during playback, improving the efficiency of synchronized display of video and subtitles, preventing the synchronization process from affecting playback smoothness, and improving the user experience. Meanwhile, in the subsequent process of outputting subtitle data, before the video-subtitle output plugin outputs subtitle data it first judges whether the subtitle data to be output is identical to the subtitle data last output; if identical, it is not output again. If they differ, it further judges whether the subtitle data to be output is a null value, in which case the subtitle content currently shown on the second display layer (used for displaying subtitles) is cleared directly; otherwise the subtitle data to be output is displayed on the second display layer. With this processing, the efficiency of synchronized display of video and subtitles can be improved, the consumption of system resources reduced, and the user experience improved.
Corresponding to the foregoing embodiments of the method for synchronized display of video and subtitles, the application also provides embodiments of a device for synchronized display of video and subtitles.
The embodiments of the device for synchronized display of video and subtitles of this application can be applied to a multimedia playback device. The device embodiments may be implemented in software, or in hardware or a combination of hardware and software. Taking a software implementation as an example, as a device in the logical sense, it is formed by the processor of the multimedia playback device in which it is located reading the corresponding computer program instructions from non-volatile storage into memory and running them. In terms of hardware, Fig. 6 is a hardware structure diagram of the multimedia playback device in which the device for synchronized display of video and subtitles of this application is located: besides the processor 61, memory 62, network interface 63, and non-volatile storage 64 shown in Fig. 6, the multimedia playback device in which the device embodiment is located may generally include other hardware according to the actual functions of that device, which is not described further here.
Referring to Fig. 7, a block diagram of one embodiment of the device for synchronized display of video and subtitles according to an embodiment of the present application, the device may include: an acquisition module 71, an establishing module 72, and an output module 73.
The acquisition module 71 may be used to obtain a video presentation timestamp indicating the display time of video data to be played, and to obtain a subtitle presentation timestamp range indicating the display time of subtitle data to be played;
the establishing module 72 may be used to establish the association between the video data and the subtitle data if the video presentation timestamp falls within the subtitle presentation timestamp range;
the output module 73 may be used to synchronously output the associated video data and subtitle data according to the association.
In one embodiment, the acquisition module 71 includes (not shown in Fig. 7):
a first acquisition submodule, for obtaining a first timestamp indicating when display of the subtitle data to be played starts;
a second acquisition submodule, for obtaining a second timestamp indicating when display of the subtitle data to be played ends;
the video presentation timestamp falling within the subtitle presentation timestamp range includes:
the video presentation timestamp being no less than the first timestamp and no greater than the second timestamp.
In one embodiment, the device can be applied in a video-subtitle synchronized playback pipeline created based on the GStreamer framework;
in the video-subtitle synchronized playback pipeline, a video-subtitle synchronization plugin is connected to the video decoding plugin, the subtitle decoding plugin, and the video-subtitle output plugin, respectively; it receives the video data output by the video decoding plugin and the subtitle data output by the subtitle decoding plugin;
establishes the association between the video data output by the video decoding plugin and the subtitle data output by the subtitle decoding plugin, and transmits the video data and the subtitle data to the video-subtitle output plugin;
the video-subtitle output plugin is used to synchronously output the associated video data and subtitle data according to the association.
In one embodiment, the video-subtitle output plugin may specifically be used to: after receiving the video data transmitted by the video-subtitle synchronization plugin, obtain the subtitle data associated with the received video data according to the association;
output the received video data to a first display layer, so that the first display layer displays the received video data; and
output the associated subtitle data to a second display layer, so that the second display layer displays the associated subtitle data;
wherein the second display layer is located above the first display layer.
In one embodiment, outputting the associated subtitle data to the second display layer may include:
judging whether the associated subtitle data is a null value;
if the associated subtitle data is a null value, clearing the subtitle content currently shown on the second display layer.
In one embodiment, outputting the associated subtitle data to the second display layer may include:
judging whether the associated subtitle data is identical to the subtitle data last output to the second display layer;
if they differ, outputting the associated subtitle data to the second display layer.
For the implementation of the functions and roles of each unit in the above device, refer to the implementation of the corresponding steps in the above method; details are not repeated here.
Since the device embodiments substantially correspond to the method embodiments, for the relevant parts refer to the description of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this application. Those of ordinary skill in the art can understand and implement this without creative effort.
The foregoing are only preferred embodiments of the application and are not intended to limit the application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the application shall fall within the scope of protection of the application.

Claims (10)

1. A method for synchronized display of video and subtitles, characterized in that the method comprises:
obtaining a video presentation timestamp indicating the display time of video data to be played, and obtaining a subtitle presentation timestamp range indicating the display time of subtitle data to be played;
if the video presentation timestamp falls within the subtitle presentation timestamp range, establishing an association between the video data and the subtitle data;
synchronously outputting the associated video data and subtitle data according to the association.
2. The method according to claim 1, characterized in that obtaining the subtitle presentation timestamp range indicating the display time of the subtitle data to be played comprises:
obtaining, respectively, a first timestamp indicating when display of the subtitle data to be played starts and a second timestamp indicating when display of the subtitle data ends;
the video presentation timestamp falling within the subtitle presentation timestamp range comprises:
the video presentation timestamp being no less than the first timestamp and no greater than the second timestamp.
3. The method according to claim 1 or 2, characterized in that the method is applied in a video-subtitle synchronized playback pipeline created based on the GStreamer framework;
in the video-subtitle synchronized playback pipeline, a video-subtitle synchronization plugin is connected to a video decoding plugin, a subtitle decoding plugin, and a video-subtitle output plugin, respectively; it receives the video data output by the video decoding plugin and the subtitle data output by the subtitle decoding plugin;
establishes the association between the video data output by the video decoding plugin and the subtitle data output by the subtitle decoding plugin, and transmits the video data and the subtitle data to the video-subtitle output plugin;
the video-subtitle output plugin synchronously outputs the associated video data and subtitle data according to the association.
4. The method according to claim 3, characterized in that the video-subtitle output plugin is specifically used to: after receiving the video data transmitted by the video-subtitle synchronization plugin, obtain the subtitle data associated with the received video data according to the association;
output the received video data to a first display layer, so that the first display layer displays the received video data; and
output the associated subtitle data to a second display layer, so that the second display layer displays the associated subtitle data;
wherein the second display layer is located above the first display layer.
5. The method according to claim 4, characterized in that outputting the associated subtitle data to the second display layer comprises:
judging whether the associated subtitle data is a null value;
if the associated subtitle data is a null value, clearing the subtitle content currently shown on the second display layer.
6. The method according to claim 4, characterized in that outputting the associated subtitle data to the second display layer comprises:
judging whether the associated subtitle data is identical to the subtitle data last output to the second display layer;
if they differ, outputting the associated subtitle data to the second display layer.
7. A device for synchronized display of video and subtitles, characterized in that the device comprises:
an acquisition module, for obtaining a video presentation timestamp indicating the display time of video data to be played, and obtaining a subtitle presentation timestamp range indicating the display time of subtitle data to be played;
an establishing module, for establishing an association between the video data and the subtitle data if the video presentation timestamp falls within the subtitle presentation timestamp range;
an output module, for synchronously outputting the associated video data and subtitle data according to the association.
8. The device according to claim 7, characterized in that the acquisition module comprises:
a first acquisition submodule, for obtaining a first timestamp indicating when display of the subtitle data to be played starts;
a second acquisition submodule, for obtaining a second timestamp indicating when display of the subtitle data to be played ends;
the video presentation timestamp falling within the subtitle presentation timestamp range comprises:
the video presentation timestamp being no less than the first timestamp and no greater than the second timestamp.
9. The device according to claim 7 or 8, characterized in that the device is applied in a video-subtitle synchronized playback pipeline created based on the GStreamer framework;
in the video-subtitle synchronized playback pipeline, a video-subtitle synchronization plugin is connected to a video decoding plugin, a subtitle decoding plugin, and a video-subtitle output plugin, respectively; it receives the video data output by the video decoding plugin and the subtitle data output by the subtitle decoding plugin;
establishes the association between the video data output by the video decoding plugin and the subtitle data output by the subtitle decoding plugin, and transmits the video data and the subtitle data to the video-subtitle output plugin;
the video-subtitle output plugin synchronously outputs the associated video data and subtitle data according to the association.
10. The device according to claim 9, characterized in that the video-subtitle output plugin is specifically used to: after receiving the video data transmitted by the video-subtitle synchronization plugin, obtain the subtitle data associated with the received video data according to the association;
output the received video data to a first display layer, so that the first display layer displays the received video data; and
output the associated subtitle data to a second display layer, so that the second display layer displays the associated subtitle data;
wherein the second display layer is located above the first display layer.
CN201711145357.9A 2017-11-17 2017-11-17 Method and device for synchronized display of video and subtitles Pending CN107864393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711145357.9A CN107864393A (en) Method and device for synchronized display of video and subtitles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711145357.9A CN107864393A (en) Method and device for synchronized display of video and subtitles

Publications (1)

Publication Number Publication Date
CN107864393A true CN107864393A (en) 2018-03-30

Family

ID=61702089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711145357.9A Pending CN107864393A (en) Method and device for synchronized display of video and subtitles

Country Status (1)

Country Link
CN (1) CN107864393A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090214178A1 (en) * 2005-07-01 2009-08-27 Kuniaki Takahashi Reproduction Apparatus, Video Decoding Apparatus, and Synchronized Reproduction Method
CN101197946A (en) * 2006-12-06 2008-06-11 中兴通讯股份有限公司 Video and word synchronizing apparatus
CN105979169A (en) * 2015-12-15 2016-09-28 乐视网信息技术(北京)股份有限公司 Video subtitle adding method, device and terminal
CN106792114A (en) * 2016-12-06 2017-05-31 深圳Tcl数字技术有限公司 The changing method and device of captions
CN106851401A (en) * 2017-03-20 2017-06-13 惠州Tcl移动通信有限公司 A kind of method and system of automatic addition captions

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111092991A (en) * 2019-12-20 2020-05-01 广州酷狗计算机科技有限公司 Lyric display method and device and computer storage medium
CN111193965A (en) * 2020-01-15 2020-05-22 北京奇艺世纪科技有限公司 Video playing method, video processing method and device
CN113473045A (en) * 2020-04-26 2021-10-01 海信集团有限公司 Subtitle adding method, device, equipment and medium
CN114302215A (en) * 2021-12-29 2022-04-08 北京奕斯伟计算技术有限公司 Video data stream decoding system, method, electronic device, and medium
CN114302215B (en) * 2021-12-29 2023-09-29 北京奕斯伟计算技术股份有限公司 Video data stream decoding system, method, electronic device and medium

Similar Documents

Publication Publication Date Title
US6512552B1 (en) Subpicture stream change control
CN107864393A (en) Method and device for synchronized display of video and subtitles
US6297797B1 (en) Computer system and closed caption display method
CA2772021C (en) Storage medium having interactive graphic stream and apparatus for reproducing the same
JP4985807B2 (en) Playback apparatus and playback method
KR101339535B1 (en) Contents reproduction device and recording medium
CN110364189A Reproduction apparatus and reproduction method
JP2010250562A (en) Data structure, recording medium, playback apparatus, playback method, and program
JP2010028283A (en) Video processing device and control method therefor
KR100604831B1 (en) Audio and video player synchronizing ancillary word and image to audio and method thereof
CN109379619A Audio and picture synchronization method and device
CN103248941A (en) Multi-channel video source synchronous display method and device
CN106463150A (en) Recording medium, playback method, and playback device
WO2010119814A1 (en) Data structure, recording medium, reproducing device, reproducing method, and program
US6587635B1 (en) Subpicture master control
CN107580264A Multimedia resource playback processing method and device
CN105847990B (en) The method and apparatus for playing media file
EP1753235A2 (en) Apparatus and method for displaying a secondary video signal with a primary video signal
CN113542907A (en) Multimedia data receiving and transmitting method, system, processor and player
CN108966000A (en) Playing method and device, medium and terminal thereof
KR20050088486A (en) Method of creating vobu in hd-dvd system
US20100253848A1 (en) Displaying image frames in combination with a subpicture frame
CN108012176A Data switching method, device and terminal
US8213778B2 (en) Recording device, reproducing device, recording medium, recording method, and LSI
JP2010252055A (en) Data structure, recording medium, reproducing device and reproducing method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Applicant after: Hisense Visual Technology Co., Ltd.

Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Applicant before: QINGDAO HISENSE ELECTRONICS Co.,Ltd.

CB02 Change of applicant information
RJ01 Rejection of invention patent application after publication

Application publication date: 20180330

RJ01 Rejection of invention patent application after publication