
CN112887806A - Subtitle processing method, subtitle processing device, electronic equipment and subtitle processing medium - Google Patents


Info

Publication number
CN112887806A
Authority
CN
China
Prior art keywords
subtitle, caption, video, data, subtitles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110097666.3A
Other languages
Chinese (zh)
Inventor
余锋
金凌琳
李迈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dangqu Network Technology Hangzhou Co Ltd
Original Assignee
Dangqu Network Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dangqu Network Technology Hangzhou Co Ltd filed Critical Dangqu Network Technology Hangzhou Co Ltd
Priority to CN202110097666.3A priority Critical patent/CN112887806A/en
Publication of CN112887806A publication Critical patent/CN112887806A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278Subtitling

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The invention discloses a subtitle processing method, a subtitle processing apparatus, an electronic device, and a medium, relating to the field of multimedia technology. It addresses the poor real-time performance that results, in the related art, from a server-side subtitle library being populated manually and periodically by a worker. The method comprises the following steps: acquiring a first video, and recording the subtitles carried by the first video as first subtitles; extracting feature data of the first video and recording it as first feature data, wherein the feature data comprises the name and duration of the video; and uploading the first subtitles and the first feature data, wherein the first subtitles are stored in a subtitle library of the server and associated with the first feature data. The method has the advantage of improving the real-time performance of the subtitle library.

Description

Subtitle processing method, subtitle processing device, electronic equipment and subtitle processing medium
Technical Field
The present invention relates to the field of multimedia technologies, and in particular, to a method and an apparatus for processing subtitles, an electronic device, and a medium.
Background
Video playing is a very popular application on today's networks. For video content such as foreign movies and television series, subtitles are an important part of playback: on the one hand, they let audiences who do not understand the language in the video hear the original audio while still following the content; on the other hand, they effectively help hearing-impaired audiences understand the video content.
In the related art, when a client plays a video from an external device and the video has no subtitles, the client needs to obtain subtitles matching the video from a server and use them during playback. However, the server-side subtitle library is usually populated manually and periodically by a worker, so its real-time performance is poor.
At present, no effective solution has been proposed for this problem of poor real-time performance caused by the server-side subtitle library being periodically and manually maintained.
Disclosure of Invention
To overcome the disadvantages of the related art, an object of the present invention is to provide a subtitle processing method, apparatus, electronic device, and medium that improve the real-time performance of a subtitle library.
One of the objects of the invention is achieved by the following technical solution:
a subtitle processing method, comprising:
acquiring a first video, and recording the subtitles carried by the first video as first subtitles;
extracting feature data of the first video and recording it as first feature data, wherein the feature data comprises the name and duration of the video;
and uploading the first subtitles and the first feature data, wherein the first subtitles are stored in a subtitle library of a server and associated with the first feature data.
In some embodiments, the first subtitle carries subtitle production data; before uploading the first subtitle, the method further includes:
integrating the subtitle production data and the first feature data into data to be detected and uploading it, wherein the server judges whether a subtitle matching the data to be detected exists in the subtitle library, and if so, generates and issues a deletion signal, otherwise generates and issues an upload signal;
deleting the first subtitle in response to the deletion signal, in the case that the deletion signal is received;
and uploading the first subtitle in response to the upload signal, in the case that the upload signal is received.
In some embodiments, the first subtitle carries subtitle production data; after uploading the first subtitle, the method further includes:
and the server integrates the subtitle production data and the first feature data into data to be detected, judges whether a subtitle matching the data to be detected exists in the subtitle library, and if so, deletes the first subtitle from the subtitle library.
In some embodiments, for any subtitle in the subtitle library, the name of the subtitle includes associated feature data.
In some of these embodiments, the method further comprises:
acquiring a second video and extracting its feature data, recorded as second feature data, wherein the second video does not carry subtitles;
uploading the second feature data, wherein the server acquires the subtitles associated with the second feature data from the subtitle library and marks them as target subtitles, then marks the target subtitle with the most uses, the highest use score, or the most uploads as the selected subtitle and issues it;
and receiving the selected subtitle and using it in cooperation with the second video.
In some of these embodiments, the first subtitle is the selected subtitle of the first video; in the case that the selected subtitle corresponding to the second video is determined according to the use count, the method further comprises:
judging whether the use duration of the selected subtitle is greater than a preset value, and if so, generating and uploading a first update signal, wherein the server responds to the first update signal by updating the use count of the selected subtitle accordingly.
In some of these embodiments, the method further comprises:
generating and uploading a subtitle replacement signal, wherein the server responds to the subtitle replacement signal by obtaining a corresponding subtitle from the subtitle library, recording it as a self-selected subtitle, and issuing it;
controlling the self-selected subtitle to be used in cooperation with the corresponding video;
and judging whether the use duration of the self-selected subtitle is greater than the preset value and greater than or equal to the use duration of the selected subtitle, and if so, after playback of the corresponding video ends, inhibiting generation or uploading of the first update signal and generating and uploading a second update signal, wherein the server responds to the second update signal by updating the use count of the self-selected subtitle accordingly.
A second object of the invention is achieved by the following technical solution:
a subtitle processing apparatus comprising:
an acquisition module, configured to acquire a first video, the subtitles carried by the first video being recorded as first subtitles;
an extraction module, configured to extract the feature data of the first video and record it as first feature data, wherein the feature data comprises the name and duration of the video;
and an upload module, configured to upload the first subtitles and the first feature data, wherein the first subtitles are stored in a subtitle library of a server and associated with the first feature data.
A further object of the invention is to provide an electronic device comprising a memory and a processor, the memory storing a computer program and the processor being arranged to carry out the method described above when executing the computer program.
A fourth object of the present invention is to provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method described above.
Compared with the related art, the invention has the following beneficial effects: for videos carrying subtitles, the subtitles can be automatically uploaded to the server-side subtitle library. On the one hand, automatic uploading improves the real-time performance of the subtitle library; on the other hand, continuous uploading enriches the subtitle library, i.e., improves its comprehensiveness. The invention also establishes an association between the feature data of a video and its subtitles, so that associated subtitles can be obtained from the feature data.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a subtitle processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a subtitle issuing step according to a second embodiment of the present application;
fig. 3 is a block diagram of a subtitle processing apparatus according to a fourth embodiment of the present application;
fig. 4 is a block diagram of an electronic device according to a fifth embodiment of the present application.
Reference numerals: 31, acquisition module; 32, extraction module; 33, upload module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It will be appreciated that a development effort based on this disclosure might be complex and tedious, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure, and is not intended to limit the scope of this disclosure.
Example one
This embodiment provides a subtitle processing method, aiming to solve the problem in the related art that the real-time performance of a server-side subtitle library is poor because it is populated manually and periodically by a worker.
Fig. 1 is a flowchart of a subtitle processing method according to an embodiment of the present application, and referring to fig. 1, the method includes steps S101 to S103.
Step S101, a client acquires a first video and marks the subtitles carried by the first video as first subtitles. It should be understood that the client is communicatively connected to the server, and clients may relate to the server many-to-one. The client is a terminal-side system that may be installed in terminal devices such as mobile phones, tablets, smart televisions, set-top boxes, and projectors. The first video is sent to the terminal device by an external device and played by the client; it is worth noting that, when one terminal device executes the method, all other terminal devices may be regarded as external devices.
Step S102, the client extracts the feature data of the first video and records it as first feature data. The feature data includes the name and duration of the corresponding video; that is, the first feature data includes the name and duration of the first video.
Step S103, the client uploads the first subtitle and the first feature data. It is to be understood that, when the server receives the first subtitle, the first subtitle is stored in the subtitle library; it is worth noting that the server may be a physical server, a cloud server, a processor, and the like. The association between the first subtitle and the first feature data may be kept in a table, i.e., the server can determine the subtitles corresponding to given feature data by table lookup. The association may be established at the client or at the server, which is not limited here. Feature data and subtitles may be related one-to-many: using the feature data as the search condition at the server may return more than one subtitle, from which one is selected for issuing.
Because a video commonly exists in multiple versions, which generally differ in duration, and the subtitles of each version differ and cannot be used interchangeably, searching for a video's subtitles by feature data consisting of both name and duration improves search accuracy.
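The description prescribes no concrete implementation for this association. Purely as an illustrative sketch — the patent contains no code, so every name below (FeatureData, SubtitleRecord, SubtitleLibrary) is hypothetical — the one-to-many lookup table could be kept on the server along these lines, in Python:

from dataclasses import dataclass

@dataclass(frozen=True)
class FeatureData:
    """Feature data of a video: its name and duration in seconds."""
    name: str
    duration: int

@dataclass
class SubtitleRecord:
    """A subtitle file plus the statistics the server keeps about it."""
    content: bytes
    use_count: int = 0
    use_score: float = 0.0
    upload_count: int = 0

class SubtitleLibrary:
    """Server-side table: feature data -> candidate subtitles (one-to-many)."""

    def __init__(self) -> None:
        self._table: dict[FeatureData, list[SubtitleRecord]] = {}

    def store(self, features: FeatureData, subtitle: SubtitleRecord) -> None:
        self._table.setdefault(features, []).append(subtitle)

    def lookup(self, features: FeatureData) -> list[SubtitleRecord]:
        # Keying on (name, duration) disambiguates different cuts of a video.
        return self._table.get(features, [])

Storing a received first subtitle would then be library.store(first_feature_data, record), and the issuing step of the second embodiment would start from library.lookup(second_feature_data).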
In summary, for videos carrying subtitles, the subtitles can be automatically uploaded to the server-side subtitle library. On the one hand, automatic uploading improves the real-time performance of the subtitle library; on the other hand, continuous uploading enriches the subtitle library, i.e., improves its comprehensiveness. The invention also establishes an association between the feature data of a video and its subtitles, so that associated subtitles can be obtained from the feature data.
As an alternative embodiment, the feature data may also include the format of the video, i.e., the first feature data also includes the format of the first video. It can be understood that videos of different formats may require subtitles of different formats, so this technical solution reduces the risk of using the wrong subtitles.
As an alternative implementation, for any subtitle in the subtitle library, the name of the subtitle includes the associated feature data. It should be noted that when the client extracts the first subtitle, its name is usually a garbled or numeric string; when the server/client acquires the first subtitle, it generates a name for the first subtitle from the first feature data according to a preset format, which reduces the risk of search failures and facilitates manual checking by a worker.
Further, the renaming of the first subtitle is preferably executed at the server, which can take the state of the entire subtitle library into account, reducing the risk of identical names among first subtitles sharing the same first feature data. Specifically, when receiving a first subtitle, the server determines its sequence number from the first feature data; sequence numbers correspond one-to-one with the first subtitles under the same feature data. For example, the sequence number may represent the n-th first subtitle received by the server under the same first feature data. The server then renames the first subtitle from the first feature data and the sequence number.
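As a sketch of this renaming — the <name>_<duration>_<sequence> pattern below is an assumption, since the description only requires some preset format combining the first feature data with a per-feature-data sequence number:

import itertools
from collections import defaultdict

# Hypothetical per-feature-data counters held by the server: the n-th first
# subtitle received under the same (name, duration) gets sequence number n.
_counters = defaultdict(lambda: itertools.count(1))

def make_subtitle_name(video_name: str, duration_s: int) -> str:
    """Build a unique subtitle name from feature data plus a sequence number."""
    seq = next(_counters[(video_name, duration_s)])
    return f"{video_name}_{duration_s}s_{seq:03d}.srt"

For example, the third subtitle received for ("SomeMovie", 7200) would be renamed SomeMovie_7200s_003.srt.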
As an optional implementation manner, the method may further include a deduplication step, where the deduplication step is performed before the first subtitle is uploaded to the server, and specifically, the deduplication step may include the following steps.
The client integrates the subtitle production data and the first feature data into data to be detected and uploads it. It should be noted that the first subtitle carries subtitle production data, which may include the maker and the production method. It is understood that videos with identical feature data are the same video; therefore, two subtitles with identical subtitle production data can, with high probability, be regarded as the same subtitle.
The server judges whether a subtitle matching the data to be detected exists in the subtitle library. If so, the subtitle library already contains a subtitle identical to the first subtitle, so the first subtitle should be deleted; accordingly, the server generates a deletion signal and issues it to the client. If not, the subtitle library has no subtitle identical to the first subtitle, so it should be uploaded; accordingly, the server generates and issues an upload signal.
When the client receives the deletion signal, it deletes the first subtitle in response; when the client receives the upload signal, it uploads the first subtitle in response.
This technical scheme deduplicates the subtitle library, ensuring that each subtitle in the library is unique and reducing the space the library occupies. It can be understood that the data to be detected is smaller and faster to transmit than the first subtitle itself, so deferring the upload of the first subtitle improves deduplication efficiency.
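A minimal sketch of this handshake, under assumed names — the signal constants and the membership test stand in for whatever the real server uses:

DELETE_SIGNAL = "delete"
UPLOAD_SIGNAL = "upload"

def server_check(library_index: set, data_to_detect: tuple) -> str:
    """Server side: data_to_detect pairs the subtitle production data with
    the first feature data; a hit means an identical subtitle already exists."""
    return DELETE_SIGNAL if data_to_detect in library_index else UPLOAD_SIGNAL

def client_act(signal: str, first_subtitle, upload, delete) -> None:
    """Client side: discard the local copy on a deletion signal, or
    transmit the full subtitle file on an upload signal."""
    if signal == DELETE_SIGNAL:
        delete(first_subtitle)
    else:
        upload(first_subtitle)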
As an optional embodiment, differing from the above, the deduplication step may instead be performed after the client uploads the first subtitle to the server; specifically, it may include the following steps.
The server integrates the subtitle production data and the first feature data into data to be detected (see the description above for details, which are not repeated here); the server then judges whether a subtitle matching the data to be detected exists in the subtitle library, and if so, deletes the first subtitle from the subtitle library.
This technical scheme likewise deduplicates the subtitle library, ensuring unique subtitles and reducing occupied space. It can be understood that the client only needs to perform the upload operation, so the system requirements on the client are low, and the server's deletion of subtitles from the library improves deletion accuracy. It should be noted that the upload time of the data to be detected should be earlier than or equal to that of the first subtitle; preferably the two are integrated and uploaded together, to reduce the risk of information errors and omissions.
It will be appreciated that the steps illustrated in the flowcharts above may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in a different order than illustrated here.
Example two
The second embodiment provides a subtitle processing method and builds on the first embodiment. The method may further include a subtitle issuing step. Fig. 2 is a flowchart of the subtitle issuing step in the second embodiment of the present application; referring to fig. 1 and fig. 2, the subtitle issuing step may include steps S201 to S204.
Step S201, the client acquires a second video, extracts feature data of the second video and records the feature data as second feature data, where the second video does not carry subtitles. Accordingly, the second feature data includes the name and duration of the second video, and the like.
Step S202, the client uploads the second feature data to the server.
Step S203, the server acquires the subtitles associated with the second feature data from the subtitle library and marks them as target subtitles; the server then marks the target subtitle with the most uses, the highest use score, or the most uploads as the selected subtitle and issues it.
It can be understood that querying the subtitle library with the second feature data as the condition may return more than one target subtitle, and each target subtitle may carry any one or combination of a use count, a use score, and an upload count, as long as the issuing condition can be satisfied.
Step S204, the client receives the selected subtitle and uses it in cooperation with the second video; for the cooperative use itself, reference may be made to the prior art, and details are not repeated here.
With this technical scheme, subtitles can be issued on the basis of the established subtitle library, and issuing refers to any one of use count, use score, and upload count, so that the selected subtitle better fits the second video and the user's viewing experience improves.
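The server's choice of the selected subtitle reduces to an arg-max over one of three counters; a sketch, reusing the record shape from the earlier SubtitleLibrary sketch (the criterion names are assumptions):

def choose_selected_subtitle(targets: list, criterion: str = "use_count"):
    """Return the target subtitle that maximises the chosen statistic;
    criterion is one of "use_count", "use_score", "upload_count"."""
    if not targets:
        return None  # nothing in the library matches this feature data
    return max(targets, key=lambda record: getattr(record, criterion))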
EXAMPLE III
The third embodiment provides a subtitle processing method built on the second embodiment. Specifically, when the selected subtitle of the second video is determined according to the use count, then correspondingly, for a first video carrying a first subtitle, the carried first subtitle can be regarded as its selected subtitle. The method may further include the following steps.
The client judges whether the use duration of the selected subtitle is greater than a preset value; if so, it generates and uploads a first update signal, and the server responds to the first update signal by updating the use count of the selected subtitle accordingly.
It is to be understood that this scheme applies not only to the second video but also to the first video. It should be noted that the preset value may be a fixed time, for example 10 min. However, since video durations vary widely, the preset value may instead be derived from the video duration and a preset progress ratio; for example, with a preset progress ratio of 50% and a video duration of 120 min, the preset value is 60 min. Of course, the two approaches can also be used in combination, which is not limited here.
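A sketch of the threshold computation — the description allows a fixed time, a duration-scaled value, or a combination; taking the larger of the two below is one plausible combination, not something the text mandates:

def preset_value_min(duration_min: float,
                     progress_ratio: float = 0.5,
                     fixed_min: float = 10.0) -> float:
    """Use-duration threshold in minutes: e.g. max(10, 120 * 0.5) = 60."""
    return max(fixed_min, duration_min * progress_ratio)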
Further, the server issues to the client subtitle information corresponding to the video. The subtitle information consists of entries, excluding the entry for the subtitle the client is currently using. Any entry may include one or more of: subtitle name, use count, use score, and upload count, making it convenient for the user to learn about other subtitles through the client and make a better choice.
Correspondingly, when the user selects an entry through the client, the client generates a subtitle replacement signal and uploads it to the server; the server responds to the subtitle replacement signal by obtaining the corresponding subtitle from the subtitle library, recording it as the self-selected subtitle, and issuing it.
The client controls the self-selected subtitle to be used in cooperation with the corresponding video. Correspondingly, the subtitle information issued by the server is updated to match.
The client judges whether the use duration of the self-selected subtitle is greater than the preset value; if so, after playback of the corresponding video ends, generation or uploading of the first update signal is inhibited, a second update signal is generated and uploaded, and the server responds to the second update signal by updating the use count of the self-selected subtitle accordingly.
With this technical scheme, only one subtitle has its use count updated per playback of a video, which avoids update confusion and better reflects the standing of each subtitle. The preset value here preferably combines the video duration with a preset progress ratio. When the preset progress ratio lies in [0.4, 0.6], at most one first update signal and/or one second update signal can arise for the same video; when the preset progress ratio lies in [0.5, 0.6], at most one first update signal or one second update signal (but not both) can arise for the same video.
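The rule that one playback updates at most one use count can be sketched as follows (the names and the None convention are assumptions; durations and the threshold are in the same unit):

from typing import Optional

def settle_update_signal(selected_used: float,
                         chosen_used: Optional[float],
                         threshold: float) -> Optional[str]:
    """Decide which single update signal, if any, one playback produces.

    selected_used: use duration of the server-issued selected subtitle;
    chosen_used:   use duration of the self-selected subtitle, or None
                   if the user never replaced the subtitle."""
    if (chosen_used is not None and chosen_used > threshold
            and chosen_used >= selected_used):
        return "second_update"  # credit the self-selected subtitle
    if selected_used > threshold:
        return "first_update"   # credit the selected subtitle
    return None                 # neither subtitle crossed the threshold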
Further, the server may clean the subtitle library according to any one or combination of use count, use score, upload count, and usage rate. For example, for the same feature data, the server deletes the N lowest subtitles ranked by use score, or the N lowest ranked by usage rate.
Further, to preserve the number of subtitles associated with the same feature data, the subtitle cleaning step is preferably skipped when the number of subtitles is less than a value D, with D preferably being 5. To avoid subtitles being cleaned shortly after entering the subtitle library, the cleaning step here is preferably based on the use score.
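A guarded cleaning pass might look like the following — D and the score-based ranking follow the text, while drop_n is an assumed parameter:

def clean_subtitles(records: list, keep_at_least: int = 5, drop_n: int = 1) -> list:
    """Delete the lowest-use-score subtitles under one feature data,
    skipping the cleaning entirely when too few subtitles would remain."""
    if len(records) - drop_n < keep_at_least:
        return records  # preserve the subtitle count for this feature data
    ranked = sorted(records, key=lambda r: r.use_score, reverse=True)
    return ranked[:-drop_n]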
As an alternative implementation, the method may further comprise a clipping step; specifically, the clipping step may comprise the following steps.
The client acquires the subtitle currently used by the video and records it as the current subtitle. Correspondingly, the video is recorded as the current video; the current subtitle may be carried by the current video or issued by the server.
The client receives a trigger signal carrying data to be processed; in response, it adjusts the progress of the current subtitle according to the data to be processed and uploads the data to be processed. The data to be processed may include an adjustment time point, an adjustment direction, and an adjustment amount, where the adjustment direction is fast-forward or rewind. It can be understood that, for how the client adjusts the current subtitle, reference may be made to the prior art; details are not repeated here.
When use of the current subtitle ends, the client judges whether the time difference between the first adjustment and the end of use of the current subtitle exceeds a threshold; if so, the subsequent clipping operation is executed. The threshold may be a fixed value, for example 10 min. It will be appreciated that the current video may undergo multiple adjustments, with correspondingly many sets of data to be processed.
The server clips the current subtitle according to the data to be processed and stores the clipped subtitle in the subtitle library. For example, if one set of data to be processed is: 10th minute, fast-forward, 0.5 s, the server inserts a 0.5 s blank data packet at the 10th minute of the subtitle. It is worth noting that, when multiple sets of data to be processed exist, all adjustment time points refer to the subtitle's original timeline.
When the current subtitle does not match the video, on the one hand the client adjusts the progress of the current subtitle according to the data to be processed to preserve the user experience; on the other hand, the server clips the current subtitle according to the data to be processed so that the clipped subtitle better matches the current video, thereby improving the real-time performance and effectiveness of the subtitle library.
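The clipping itself can be sketched as a timestamp rebuild over the original timeline. Following the blank-packet example above, a fast-forward adjustment at time t inserts a blank there and so delays every cue at or after t; mapping rewind to the opposite shift is an assumption:

def clip_subtitle(cues: list, adjustments: list) -> list:
    """Rebuild subtitle cue timestamps from the to-be-processed data.

    cues:        list of (start_s, end_s, text) on the original timeline;
    adjustments: list of (time_point_s, direction, amount_s), with every
                 time point referring to the subtitle's original timeline."""
    clipped = []
    for start, end, text in cues:
        shift = 0.0
        for t, direction, amount in adjustments:
            if start >= t:
                shift += amount if direction == "fast_forward" else -amount
        clipped.append((start + shift, end + shift, text))
    return clipped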
As an optional implementation, the server uses the basic information of the current subtitle as the basic information of the clipped subtitle and deletes the current subtitle from the subtitle library. The basic information may include one or more of the use count, use score, upload count, and the like, as long as it can serve the issuing conditions.
Example four
The fourth embodiment provides a subtitle processing apparatus, which is the virtual-apparatus counterpart of the foregoing embodiments. Fig. 3 is a block diagram of a subtitle processing apparatus according to the fourth embodiment of the present application; referring to fig. 3, the apparatus includes: an acquisition module 31, an extraction module 32, and an upload module 33.
The acquisition module 31 is configured to acquire a first video, the subtitles carried by the first video being recorded as first subtitles;
the extraction module 32 is configured to extract the feature data of the first video and record it as first feature data, wherein the feature data includes the name and duration of the video;
and the upload module 33 is configured to upload the first subtitle and the first feature data, wherein the first subtitle is stored in a subtitle library of the server and associated with the first feature data.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
EXAMPLE five
The fifth embodiment provides an electronic device. Fig. 4 is a block diagram of the electronic device according to the fifth embodiment of the present application; as shown in fig. 4, the electronic device includes a memory and a processor, where the memory stores a computer program and the processor is configured to run the computer program to execute any of the subtitle processing methods of the foregoing embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In addition, in combination with the subtitle processing method of the foregoing embodiments, an embodiment of the present application may provide a storage medium. The storage medium stores a computer program; when executed by a processor, the computer program implements any of the subtitle processing methods of the above embodiments, the method comprising:
acquiring a first video, and recording subtitles carried by the first video as first subtitles;
extracting feature data of the first video and recording it as first feature data, wherein the feature data comprises the name and duration of the video;
and uploading the first subtitle and the first feature data, wherein the first subtitle is stored in a subtitle library of the server and associated with the first feature data.
As shown in fig. 4, the processor, the memory, the input device, and the output device in the electronic device may be connected by a bus or by other means; fig. 4 takes a bus connection as the example.
The memory, as a computer-readable storage medium, may include high-speed random access memory, non-volatile memory, and the like, and may be used to store an operating system, software programs, computer-executable programs, and a database, such as the program instructions/modules corresponding to the subtitle processing method of an embodiment of the present invention; it also provides the running environment for the operating system and the computer programs. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the electronic device through a network.
The processor, which provides computing and control capabilities, may comprise a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application. The processor executes the various functional applications and data processing of the electronic device by running the computer-executable programs, software programs, instructions, and modules stored in the memory, thereby realizing the subtitle processing method of the first embodiment.
The output device of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
The electronic device may further include a network interface/communication interface, the network interface of the electronic device being for communicating with an external terminal through a network connection. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Those skilled in the art will appreciate that the structure shown in fig. 4 is a block diagram of only a portion of the structure relevant to the present application, and does not constitute a limitation on the electronic device to which the present application is applied, and a particular electronic device may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in the embodiment of the subtitle processing method, each included unit and module are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application are not limiting in number and may refer to the singular or the plural. The terms "comprises," "comprising," "including," "has," "having," and any variations thereof are intended to cover a non-exclusive inclusion. References to "connected," "coupled," and the like are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. The term "plurality" means two or more. "And/or" describes an association relationship between objects, meaning that three relationships may exist: for example, "A and/or B" covers A alone, A together with B, and B alone. The character "/" generally indicates an "or" relationship between the surrounding objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering.
The above examples express only several embodiments of the present application, and their description is specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for processing subtitles, the method comprising:
acquiring a first video, and recording the subtitles carried by the first video as first subtitles;
extracting feature data of the first video and recording it as first feature data, wherein the feature data comprises the name and duration of the video;
and uploading the first subtitles and the first feature data, wherein the first subtitles are stored in a subtitle library of a server and associated with the first feature data.
2. The method of claim 1, wherein the first subtitle carries subtitle production data; before uploading the first subtitle, the method further includes:
integrating the subtitle production data and the first feature data into data to be detected and uploading it, wherein the server judges whether a subtitle matching the data to be detected exists in the subtitle library, and if so, generates and issues a deletion signal, otherwise generates and issues an upload signal;
deleting the first subtitle in response to the deletion signal, in the case that the deletion signal is received;
and uploading the first subtitle in response to the upload signal, in the case that the upload signal is received.
3. The method of claim 1, wherein the first subtitle carries subtitle production data; after uploading the first subtitle, the method further includes:
and the server integrates the subtitle production data and the first feature data into data to be detected, judges whether a subtitle matching the data to be detected exists in the subtitle library, and if so, deletes the first subtitle from the subtitle library.
4. The method of claim 1, wherein for any subtitle in the subtitle library, the associated feature data is included in the name of the subtitle.
5. The method according to any one of claims 1 to 4, further comprising:
acquiring a second video and extracting its feature data, recorded as second feature data, wherein the second video does not carry subtitles;
uploading the second feature data, wherein the server acquires the subtitles associated with the second feature data from the subtitle library and marks them as target subtitles, then marks the target subtitle with the most uses, the highest use score, or the most uploads as the selected subtitle and issues it;
and receiving the selected subtitle and using it in cooperation with the second video.
6. The method of claim 5, wherein the first subtitle is the selected subtitle of the first video; in the case that the selected subtitle corresponding to the second video is determined according to the use count, the method further comprises:
judging whether the use duration of the selected subtitle is greater than a preset value, and if so, generating and uploading a first update signal, wherein the server responds to the first update signal by updating the use count of the selected subtitle accordingly.
7. The method of claim 6, further comprising:
generating and uploading a subtitle replacement signal, wherein the server responds to the subtitle replacement signal by obtaining a corresponding subtitle from the subtitle library, recording it as a self-selected subtitle, and issuing it;
controlling the self-selected subtitle to be used in cooperation with the corresponding video;
and judging whether the use duration of the self-selected subtitle is greater than the preset value, and if so, after playback of the corresponding video ends, inhibiting generation or uploading of the first update signal and generating and uploading a second update signal, wherein the server responds to the second update signal by updating the use count of the self-selected subtitle accordingly.
8. A subtitle processing apparatus, the apparatus comprising:
an acquisition module, configured to acquire a first video, the subtitles carried by the first video being recorded as first subtitles;
an extraction module, configured to extract the feature data of the first video and record it as first feature data, wherein the feature data comprises the name and duration of the video;
and an upload module, configured to upload the first subtitles and the first feature data, wherein the first subtitles are stored in a subtitle library of a server and associated with the first feature data.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to carry out the method of any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
CN202110097666.3A 2021-01-25 2021-01-25 Subtitle processing method, subtitle processing device, electronic equipment and subtitle processing medium Pending CN112887806A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110097666.3A CN112887806A (en) 2021-01-25 2021-01-25 Subtitle processing method, subtitle processing device, electronic equipment and subtitle processing medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110097666.3A CN112887806A (en) 2021-01-25 2021-01-25 Subtitle processing method, subtitle processing device, electronic equipment and subtitle processing medium

Publications (1)

Publication Number Publication Date
CN112887806A 2021-06-01

Family

ID=76051096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110097666.3A Pending CN112887806A (en) 2021-01-25 2021-01-25 Subtitle processing method, subtitle processing device, electronic equipment and subtitle processing medium

Country Status (1)

Country Link
CN (1) CN112887806A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090019009A1 (en) * 2007-07-12 2009-01-15 At&T Corp. SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR SEARCHING WITHIN MOVIES (SWiM)
CN101616181A (en) * 2009-07-27 2009-12-30 腾讯科技(深圳)有限公司 A kind of method, system and equipment of uploading with the downloaded subtitle file
CN103179093A (en) * 2011-12-22 2013-06-26 腾讯科技(深圳)有限公司 Matching system and method for video subtitles
CN103309865A (en) * 2012-03-07 2013-09-18 腾讯科技(深圳)有限公司 Method and system for realizing video source clustering
CN103067775A (en) * 2013-01-28 2013-04-24 Tcl集团股份有限公司 Subtitle display method for audio/video terminal, audio/video terminal and server
CN104104986A (en) * 2014-07-29 2014-10-15 小米科技有限责任公司 Audio frequency and subtitle synchronizing method and device
CN110798635A (en) * 2019-10-16 2020-02-14 重庆爱奇艺智能科技有限公司 Method and device for matching subtitle files for video
CN111565338A (en) * 2020-05-29 2020-08-21 广州酷狗计算机科技有限公司 Method, device, system, equipment and storage medium for playing video

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827717A (en) * 2022-04-12 2022-07-29 Oppo广东移动通信有限公司 Subtitle display method, device and equipment and storage medium

Similar Documents

Publication Publication Date Title
US10581947B2 (en) Video production system with DVE feature
CN107480236B (en) Information query method, device, equipment and medium
CN106331778B (en) Video recommendation method and device
EP2901631B1 (en) Enriching broadcast media related electronic messaging
US20150331942A1 (en) Methods, systems, and media for aggregating and presenting multiple videos of an event
CN111447505B (en) Video clipping method, network device, and computer-readable storage medium
CN102769781A (en) Method and device for recommending television program
CN106484774B (en) Correlation method and system for multi-source video metadata
WO2015061681A1 (en) Concepts for providing an enhanced media presentation
US10175863B2 (en) Video content providing scheme
CN104216956A (en) Method and device for searching picture information
CN108153882A (en) A kind of data processing method and device
US20230077534A1 (en) Content-modification system with probability-based selection feature
CN106534878A (en) Replaying method and system of live broadcast program, and server
CN113542909A (en) Video processing method and device, electronic equipment and computer storage medium
CN105045882A (en) Hot word processing method and device
CN112887806A (en) Subtitle processing method, subtitle processing device, electronic equipment and subtitle processing medium
US20180098099A1 (en) Real-time data updates from a run down system for a video broadcast
CN112911404A (en) Video subtitle processing method, apparatus, electronic device, and medium
CN112764988A (en) Data segmentation acquisition method and device
US20230092847A1 (en) Systems and Methods for Intelligent Media Content Segmentation and Analysis
WO2017019506A1 (en) News production system with dve template feature
KR20120071173A (en) System for providing additional service of vod content using sns message and method for providing additional service using the same
CN104834728A (en) Pushing method and device for subscribed video
CN113449143B (en) Video content retrieval method, device, equipment and storage medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210601)