
CN110213599A - Method, device and storage medium for processing additional information - Google Patents

Method, device and storage medium for processing additional information

Info

Publication number
CN110213599A
CN110213599A CN201910304629.8A
Authority
CN
China
Prior art keywords
target
additional information
video frame
frame
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910304629.8A
Other languages
Chinese (zh)
Inventor
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910304629.8A priority Critical patent/CN110213599A/en
Publication of CN110213599A publication Critical patent/CN110213599A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

This application discloses a method of processing additional information, comprising: acquiring a video stream, the video stream comprising at least one video frame; processing the video stream to obtain at least one target video frame and an association instruction corresponding to the target video frame; and searching for target additional information corresponding to the association instruction according to the association instruction, and superimposing the target additional information on the target video frame. In the embodiments of the present application, the related additional information is superimposed directly when the association instruction is obtained, which improves the speed of acquiring the additional information.

Description

Method, device and storage medium for processing additional information
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method, a device, and a storage medium for processing additional information.
Background
Currently, advertisements are usually inserted before, during, or after a video program is played. When an advertisement is inserted during playback, the video program must be paused, and playback resumes only after the advertisement finishes.
Inserting an advertisement during video playback completely replaces the picture of the video program with the advertisement picture. There is also an advertisement display mode that allows related information to be queried during playback without leaving the video program. In that mode, if a viewer is interested in an object in the video, the viewer can pause the video and invoke a recognition mode; a prompt to select a region of interest is then shown, the viewer selects the region, and a 'start recognition' button and a 'cancel' button are displayed below the rectangular frame of the region. Clicking 'start recognition' uploads the image of the selected region to an image recognition server, which analyzes the content of the image using machine learning, returns the names of the objects the image may contain, and attaches the corresponding interactive information, i.e., advertisements related to those objects.
As can be seen from the above description, the current way of inserting advertisements into video programs requires pausing the program and performing multiple interactive operations, resulting in inefficient information acquisition.
Disclosure of Invention
The embodiments of the present application provide an additional information processing method in which, when an association instruction is acquired, the related additional information can be superimposed directly on the video frame, improving the efficiency of acquiring the additional information. The embodiments of the present application also provide a corresponding device and storage medium.
A first aspect of the present application provides an additional information processing method, including:
acquiring a video stream, where the video stream includes at least one video frame;
processing the video stream and acquiring at least one target video frame and an association instruction corresponding to the target video frame;
and searching for target additional information corresponding to the association instruction according to the association instruction, and superimposing the target additional information on the target video frame.
A second aspect of the present application provides an additional information processing method, including:
the video processing device receives a video stream, where the video stream includes at least one video frame;
the video processing device identifies a target video frame in the video stream to determine target additional information associated with the target video frame, wherein the target video frame is included in the at least one video frame;
the video processing device configures association information associated with the target video frame for the target additional information.
A third aspect of the present application provides an additional information processing method, including:
a video reporting device determines, when reporting at least one video frame of a video stream to be sent, the type of the video frame;
when the video frame is a key frame, the video reporting device configures associated global consistency time information for the key frame;
the video reporting device sends the key frame and the global consistency time information, where the global consistency time information is used to determine identification information, the identification information is used to associate a target video frame with target additional information, and the target video frame is included in the at least one video frame.
A fourth aspect of the present application provides a terminal, comprising:
an acquisition module, configured to acquire a video stream, where the video stream includes at least one video frame;
a first processing module, configured to process the video stream and acquire at least one target video frame and an association instruction corresponding to the target video frame;
and a second processing module, configured to search for target additional information corresponding to the association instruction according to the association instruction, and superimpose the target additional information on the target video frame.
With reference to the fourth aspect, in a first possible implementation manner,
the second processing module is configured to, when the association instruction is an instruction generated in response to a click operation on a target object, and the target object is an object in the content corresponding to the target video frame, highlight the target object, and superimpose the target additional information associated with the target object on the content corresponding to the target video frame.
With reference to the fourth aspect, in a second possible implementation manner,
the second processing module is configured to, when the association instruction is an instruction generated in response to a specific event in the content corresponding to the target video frame, superimpose the target additional information associated with the specific event at a position corresponding to the specific event in the content corresponding to the target video frame.
With reference to the fourth aspect, the first or second possible implementation manner of the fourth aspect, in a third possible implementation manner, the second processing module includes:
the first acquisition unit is used for acquiring the association information of the target video frame and the target additional information according to the association instruction;
a first search unit configured to search the target additional information corresponding to the target video frame from an information distribution apparatus according to the association information acquired by the first acquisition unit;
and the first processing unit is used for superposing the target additional information acquired by the first searching unit on the target video frame.
With reference to the fourth aspect, the first or second possible implementation manner of the fourth aspect, in a fourth possible implementation manner, the second processing module includes:
the second acquisition unit is used for acquiring the associated information of the target video frame and the target additional information;
a second searching unit, configured to search the target additional information corresponding to the target video frame from an information distribution device according to the association information acquired by the second acquiring unit;
and the second processing unit is used for superposing the target additional information acquired by the second searching unit on the target video frame according to the association instruction.
With reference to the third or fourth possible implementation manner of the fourth aspect, in a fifth possible implementation manner,
the obtaining module is further configured to obtain a timestamp of the target video frame as the associated information, where the timestamp of the target additional information is the same as the timestamp of the target video frame.
With reference to the third or fourth possible implementation manner of the fourth aspect, in a sixth possible implementation manner,
the acquiring module is further configured to determine identification information according to a timestamp of a latest key frame in the video stream before the target video frame, the timestamp of the target video frame, and global consistency time information associated with the key frame, and use the identification information as the association information.
With reference to the sixth possible implementation manner of the fourth aspect, in a seventh possible implementation manner, the global consistency time information is located in the key frame, or the global consistency time information is located in a supplemental enhancement information (SEI) frame, where the SEI frame is located immediately after the key frame.
A fifth aspect of the present application provides a video processing apparatus comprising:
a receiving module, configured to receive a video stream, where the video stream includes at least one video frame;
a determining module, configured to identify a target video frame in the video stream to determine target additional information associated with the target video frame, where the target video frame is included in the at least one video frame received by the receiving module;
a configuration module, configured to configure association information associated with the target video frame for the target additional information determined by the determination module.
With reference to the fifth aspect, in a first possible implementation manner,
the configuration module is configured to configure the timestamp of the target video frame as the association information associated with the target additional information.
With reference to the fifth aspect, in a second possible implementation manner,
the configuration module is used for determining identification information according to the timestamp of the latest key frame before the target video frame in the video stream, the timestamp of the target video frame and the global consistency time information associated with the key frame; configuring the identification information as the association information associated with the target additional information.
With reference to the fifth aspect, the first possible implementation manner or the second possible implementation manner of the fifth aspect, in a third possible implementation manner, the video processing apparatus further includes:
a sending module, configured to send the association information and the target additional information to an information distribution device.
A sixth aspect of the present application provides a video reporting apparatus, including:
a determining module, configured to determine the type of each video frame in a video stream to be sent;
a configuration module, configured to configure associated global consistency time information for a key frame when the determining module determines that the video frame is a key frame;
and a sending module, configured to send the key frame and the global consistency time information configured by the configuration module, where the global consistency time information is used to determine identification information, the identification information is used to associate a target video frame with target additional information, and the target video frame is included in the at least one video frame.
A seventh aspect of the present application provides a terminal, comprising a processor and a memory:
the memory is configured to store program instructions and the processor is configured to execute the program instructions to perform the method of additional information processing as described in the first aspect above.
An eighth aspect of the present application provides a video processing apparatus comprising a processor and a memory:
the memory is configured to store program instructions, and the processor is configured to execute the program instructions to perform the method of additional information processing as described in the second aspect above.
A ninth aspect of the present application provides a video reporting apparatus, where the video reporting apparatus includes a processor and a memory:
the memory is configured to store program instructions, and the processor is configured to execute the program instructions to perform the method of additional information processing as described in the third aspect above.
The terminal, the video processing device and the video reporting device can be computer devices.
A further aspect of the present application provides a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of the first aspect described above.
A further aspect of the present application provides a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of the second aspect described above.
Yet another aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the third aspect described above.
Yet another aspect of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the above-described aspects.
According to the embodiments of the present application, the association instruction is acquired while the video stream is processed, and the related additional information is then superimposed directly according to the association instruction. Because the association between the additional information and the video frame is established in advance, no video analysis is needed at that point, which improves the efficiency of acquiring the additional information.
Drawings
FIG. 1 is a schematic diagram of an example scenario of an additional information processing system in an embodiment of the present application;
FIG. 2A is a schematic diagram of a presentation interface in an embodiment of the present application;
FIG. 2B is a schematic diagram of a presentation interface in an embodiment of the present application;
FIG. 2C is a schematic diagram of a presentation interface in an embodiment of the present application;
FIG. 2D is a schematic diagram of a presentation interface in an embodiment of the present application;
FIG. 3 is a schematic view of another presentation interface in an embodiment of the present application;
FIG. 4 is a schematic diagram of an example of additional information in the embodiment of the present application;
FIG. 5 is a schematic diagram of an example of content corresponding to a video frame in an embodiment of the present application;
FIG. 6 is a schematic diagram of another exemplary scenario of an additional information processing system in an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating an example of a scenario of a method for additional information processing in an embodiment of the present application;
fig. 8 is a schematic diagram of an embodiment of a terminal in the embodiment of the present application;
fig. 9 is a schematic diagram of another embodiment of the terminal in the embodiment of the present application;
fig. 10 is a schematic diagram of another embodiment of the terminal in the embodiment of the present application;
FIG. 11 is a schematic diagram of an embodiment of a video processing device in an embodiment of the present application;
fig. 12 is a schematic diagram of an embodiment of a video reporting apparatus in an embodiment of the present application;
fig. 13 is a schematic diagram of another embodiment of the terminal in the embodiment of the present application;
fig. 14 is a schematic diagram of an embodiment of a server in the embodiment of the present application.
Detailed Description
Embodiments of the present application will now be described with reference to the accompanying drawings, and it is to be understood that the described embodiments are merely some, but not all, of the embodiments of the present application. As known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the present application provide an additional information processing method in which, when an association instruction is obtained, the related additional information is directly displayed in an overlapping manner, improving the speed at which additional information is displayed. The embodiments of the present application also provide a corresponding device and storage medium. These are described in detail below.
The additional information processing scheme provided in the embodiments of the present application can be applied to live broadcasting or video on demand; the content of the live broadcast or on-demand program may be a video program such as a movie or a television series, or a live program such as an anchor's chat room. The following describes applications of the embodiments of the present application in different scenarios with reference to fig. 1 and fig. 2A, respectively.
Fig. 1 is a schematic diagram of an example of a scenario of an additional information processing system in an embodiment of the present application.
As shown in fig. 1, the additional information processing system provided in the embodiment of the present application may include: the video distribution system comprises a video reporting device 10A, a video processing device 20, an information distribution device 30, a terminal 40A, a terminal 40B and a terminal 40C. Of course, the terminal 40A, the terminal 40B and the terminal 40C are only examples, and any terminal capable of playing video is suitable for the solution of the present application, and the number of terminals in fig. 1 should not be understood as a limitation to the number of terminals.
The video reporting device 10A in the scenario of fig. 1 may be a server that stores video content, such as a server storing television series, movies, variety programs, or game programs.
The video processing device 20 may be a computer device having a video content recognition function.
The information distribution device 30 may be a storage device, and may be implemented in a Content Delivery Network (CDN).
The video reporting device 10A transmits a video stream to the video processing device 20, where the video stream includes at least one video frame. In practice, a video stream can be understood as a sequence of video frames comprising a plurality of video frames. In one embodiment, the video frames include I frames, P frames, B frames, and other types of frames: an I frame is a key frame and contains the complete video information of one frame; a P frame is a forward reference frame and must reference the corresponding I frame for decoding; a B frame is a bidirectional reference frame and must reference two frames, one forward and one backward, for decoding.
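As a minimal sketch of the frame model just described (the `VideoFrame` structure and its field names are assumptions for illustration, not the codec's actual data layout):

```python
from dataclasses import dataclass
from enum import Enum

class FrameType(Enum):
    I = "I"  # key frame: carries the complete video information of one frame
    P = "P"  # forward reference frame: decoded against the corresponding I frame
    B = "B"  # bidirectional reference frame: decoded against frames before and after

@dataclass
class VideoFrame:
    frame_type: FrameType
    ets: int  # encoding timestamp (ETS) assigned by the video reporting device

def references_needed(frame: VideoFrame) -> int:
    """How many other frames a decoder must reference, per the description above."""
    return {FrameType.I: 0, FrameType.P: 1, FrameType.B: 2}[frame.frame_type]
```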
After receiving the video stream, the video processing device 20 decodes the video frames in the video stream. When decoding, the video processing device 20 may decode every frame, perform image recognition on each one, and determine the additional information of a frame once its content has been recognized; alternatively, it need not perform recognition on every frame. Because adjacent frames have a high probability of repeating content, it may be assumed that one frame per second is recognized, for example: at a frame rate of 30, one frame out of every 30 is recognized to form an information stream. The sampling granularity of the information stream is only a parameter, and the present application does not distinguish between values. A recognized video frame is referred to as an identified frame.
The video processing device 20 may configure corresponding additional information for each identified frame. If every frame in the video stream is recognized, every video frame is an identified frame; if one frame out of every 30 is recognized as in the example above, then in the frame sequence numbered 0 to 30 the video frames numbered 0 and 30 are identified frames, corresponding additional information is configured for the identified frames numbered 0 and 30, and the additional information of the video frames numbered 1 to 29 can be considered the same as that of the identified frame numbered 0.
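A sketch of the sampling scheme described in the two preceding paragraphs: one identified frame per second at a frame rate of 30, with the intermediate frames inheriting the identified frame's additional information. The helper `recognize_content` stands in for the image recognition step and is hypothetical:

```python
SAMPLING_INTERVAL = 30  # at a frame rate of 30, recognize one frame per second

def build_information_stream(frames, recognize_content):
    """Map each frame index to its additional information.

    Only every SAMPLING_INTERVAL-th frame is actually recognized (an
    "identified frame"); the frames in between reuse the additional
    information of the last identified frame, since adjacent frames have
    a high probability of repeating content.
    """
    info_by_index = {}
    current_info = None
    for i, frame in enumerate(frames):
        if i % SAMPLING_INTERVAL == 0:
            current_info = recognize_content(frame)  # e.g. frames 0 and 30
        info_by_index[i] = current_info              # frames 1-29 inherit frame 0's info
    return info_by_index
```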
The additional information configured for an identified frame is related to the content to which the identified frame corresponds. In an embodiment, taking the content shown in fig. 2A as an example, the additional information may be the names of the objects in fig. 2A, such as: backpack, person, truck, car, and motorcycle. Of course, this is only an example, and the additional information may be configured according to service requirements; for instance, information including the backpack's size, color, price, and other backpacks of the same type in different styles may be configured for the backpack. Likewise, the car, the motorcycle, and the clothing and shoes worn by a person may be configured with additional information similar to that of the backpack. The display position of the additional information is not limited to the manner shown in fig. 2A and may vary: it may be displayed below the image as a ticker as in fig. 2B, on the image as a bullet screen as in fig. 2C, or on the right side of the image as a list as in fig. 2D; the display forms of figs. 2A to 2D are all examples and should not be construed as limiting. In another embodiment, the configured additional information is not limited to the advertisement-type information described above. For example, in a sports event, as shown in fig. 3, taking football as an example, cheering information may be configured for a goal event, such as a picture of applauding hands or the sound of applause. Similarly, in a game where two parties fight, a picture of applauding hands or a cheering sound can be configured for an event of hitting the other party. Of course, these are examples, and the additional information may be other kinds of information.
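The disclosure does not fix the shape of an additional-information record; purely as an illustration (all field names hypothetical), the backpack and goal-event examples above might be configured as:

```python
# Hypothetical record for the recognized backpack in fig. 2A; the fields
# mirror the service requirements named above (size, color, price, similar styles).
backpack_info = {
    "object": "backpack",
    "size": "30 L",
    "color": "black",
    "price": "299",
    "similar_items": ["backpack_style_2", "backpack_style_3"],
}

# Hypothetical record for the goal event in fig. 3: cheering content
# (a picture of applauding hands or an applause sound) tied to an event.
goal_info = {
    "event": "goal",
    "overlay_image": "applauding_hands.png",
    "overlay_sound": "applause.wav",
}
```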
After configuring the additional information for the identified video frames, the video processing device 20 may send the additional information to the information distribution device 30 for storage. The scenario shown in fig. 1 is just one implementation; the additional information may also be stored on the video processing device 20 itself. The following description takes the case in which the information is stored in the information distribution device 30 as an example.
When a user is watching a television program on the terminal 40A, 40B, or 40C, the terminal may acquire the video stream and, through operations such as decoding and rendering, display the content of the video frames on the screen. If an association instruction is acquired while the content of a certain video frame is displayed, that video frame is the target video frame in the embodiments of the present application, and the target video frame is included in the at least one video frame.
The association instruction may be generated in various ways: it may be generated by a user operation, or generated by the terminal after recognizing the content of the target video frame. These cases are introduced in separate embodiments below:
In an embodiment, the association instruction may be generated when the user selects, with a remote control, a finger touch, or a mouse, an object in the screen content corresponding to the target video frame, as shown in fig. 2A; a selected object is referred to as a target object. Each selected object, such as the backpack, person, truck, car, or motorcycle, may be highlighted. The highlighting may be done by adding a frame as shown in fig. 2A, or in other manners, for example by brightening. Of course, fig. 2A is an illustration; the user may well not select that many objects at once. If the user selects only the backpack, only the backpack is highlighted, and the target additional information related to the backpack may then be superimposed on the screen showing the highlighted backpack, for example the content shown in fig. 4. Of course, the content shown in fig. 4 is only an example; backpacks of other styles similar to this one may also be displayed, and the specific displayed content may be customized according to the needs of the merchant, which is not limited in the embodiments of the present application.
In another embodiment, the association instruction may be an instruction generated by the terminal 40A, 40B, or 40C in response to a specific event in the content corresponding to the target video frame. As shown in fig. 5, when the terminal detects a goal event while playing the content corresponding to the target video frame, the goal event triggers the generation of an association instruction, and the terminal then acquires the target additional information associated with the goal event from the information distribution device 30. As described above for the video processing device 20 side, applause additional information can be configured for the goal event; the terminal acquires the applause additional information and superimposes it on the goal screen, with a result as illustrated in fig. 3.
The scenario above describes that the target additional information is displayed in a superimposed manner after the association instruction is obtained, and that the terminal 40A, 40B, or 40C obtains the target additional information from the information distribution device 30. The target additional information can be obtained at various times. For example, after obtaining the association instruction, the terminal may obtain the target additional information from the information distribution device 30 according to the association information and then superimpose it. Alternatively, after the target video frame is decoded, the terminal may obtain the target additional information from the information distribution device 30 according to the association information in advance, and superimpose it directly once the association instruction is acquired. Obtaining the target additional information only after the association instruction reduces unnecessary downloads of additional information and saves network resources, while obtaining the additional information in advance makes the target additional information available more quickly.
Obtaining the additional information in advance is feasible because there is a time difference between the moment the terminal 40A, 40B, or 40C pulls the latest video frame and the moment it plays that frame; this difference is more than 2 seconds, which is enough time to pull and parse the additional information. Then, when the video frame is about to be rendered on the screen, if the user interacts with the information of that video frame, the interaction is served by matching against the additional information already associated with the frame.
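A sketch of this prefetch strategy under stated assumptions: the `distribution_client.lookup` call and the frame fields are hypothetical, and the more-than-2-second gap between pulling and rendering a frame is taken from the paragraph above:

```python
import threading

def on_frame_buffered(frame, association_info, distribution_client, cache):
    """Called when a frame is pulled into the playback buffer, i.e. more than
    2 seconds before it is rendered; fetch its additional information in the
    background so a later association instruction can be answered at once."""
    def fetch():
        cache[frame.ets] = distribution_client.lookup(association_info)
    threading.Thread(target=fetch, daemon=True).start()

def on_association_instruction(frame, cache):
    # By render time the target additional information is already local.
    return cache.get(frame.ets)
```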
The above describes acquiring the target additional information from the information distribution device 30 through the association information between the target video frame and the target additional information.
In some embodiments, the association information may be a timestamp. Each video frame has a timestamp, namely the encoding timestamp (ETS) assigned when the video reporting device encodes the video frame. When the video processing device 20 decodes a video frame it also decodes the ETS, and it then attaches the same timestamp to the additional information corresponding to that video frame according to the ETS. In this way, the terminal 40A, 40B, or 40C can acquire the corresponding target additional information from the information distribution device 30 through the ETS of the target video frame.
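Under the timestamp scheme the association reduces to an exact ETS match on both sides; a minimal sketch (the key-value store standing in for the information distribution device 30 is an assumption):

```python
# Video processing device side: tag the additional information with the
# same ETS decoded from the video frame, and publish it.
def publish_additional_info(frame_ets, additional_info, store):
    store[frame_ets] = additional_info  # key = encoding timestamp (ETS)

# Terminal side: the ETS of the target video frame is the lookup key.
def fetch_additional_info(target_frame_ets, store):
    return store.get(target_frame_ets)
```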
In some embodiments, the association information may be identification information. The content involved in encoding and decoding differs across terminals with different resolutions; for example, the video frames decoded by terminal devices with different definition requirements, such as ultra-high-definition, high-definition, and standard-definition, may all differ, and when the definition requirement is low many video frames may be discarded. The video processing device 20 may therefore configure identification information for the additional information. Because the relative time difference between two video frames does not change in these situations, the identification information may be determined from the relative time difference between the two video frames and global consistency time information.
In one embodiment, to determine the identification information, the video reporting device inserts the global consistency time information Tcur into the key frames of the video stream. In another embodiment, a supplemental enhancement information (SEI) frame carrying the global consistency time information Tcur is inserted immediately after the key frame. In this way, when configuring the identification information for the additional information, the video processing device 20 may determine the identification information of the additional information as ID = Tcur + (ETS - ETS0), where ETS is the encoding timestamp of the frame and ETS0 is the encoding timestamp of the latest preceding key frame. The identification acquired when the terminal 40A, 40B, or 40C plays back to the current video frame is likewise ID = Tcur + (ETS - ETS0); when playing another video frame, its identification information may be expressed as ID' = Tcur' + (ETS' - ETS0').
The SEI frames are also protected from being dropped during compression/decompression or video synchronization operations.
Through the identification information, the corresponding additional information can be found accurately in the information distribution device 30, avoiding time differences caused by complex scenarios in the decoding process.
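A sketch of the identification-information computation described above; it assumes Tcur travels in the key frame or in the SEI frame immediately after it, and that both the video processing device and the terminal apply the same formula:

```python
def identification_info(t_cur, frame_ets, keyframe_ets):
    """ID = Tcur + (ETS - ETS0).

    Tcur is the global consistency time information inserted at the latest
    key frame before the target frame, ETS0 is that key frame's encoding
    timestamp, and ETS is the target frame's encoding timestamp. The
    relative offset ETS - ETS0 is unchanged when intermediate frames are
    discarded for lower-definition playback, so the video processing
    device and the terminal compute the same ID independently.
    """
    return t_cur + (frame_ets - keyframe_ets)
```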
Fig. 1 shows a scenario in which a server providing video content serves as the video reporting device. In an anchor live-broadcast scenario, the anchor's terminal may serve as the video reporting device 10B. As shown in fig. 6, the video stream is uploaded to the video processing device 20 by the anchor's terminal; in this scenario, the video reporting device 10B also uploads the video stream to the video storage device 50, and when the terminal 40A, 40B, or 40C plays the video, the video storage device 50 provides the video stream to it. The other processes in this scenario are the same as those described with reference to figs. 1 to 5 and can be understood by referring to the above content; they are not repeated here.
The foregoing embodiments describe application scenarios of the embodiments of the present application, and the following describes a method for processing additional information provided by the embodiments of the present application with reference to the drawings.
Fig. 7 is a schematic diagram of an embodiment of a method for processing additional information in the embodiment of the present application.
As shown in fig. 7, an embodiment of the method for processing additional information provided in the embodiment of the present application includes:
701. The video reporting device sends a video stream to the video processing device.
The video stream includes at least one video frame.
702. After receiving the video stream, the video processing device identifies a target video frame in the video stream to determine target additional information associated with the target video frame.
The target video frame is included in the at least one video frame.
703. The video processing device configures the association information associated with the target video frame for the target additional information.
The association information may be carried in the target additional information, or it may not be; the association between the target additional information and the association information may also be established in other ways.
The association information may be a time stamp or identification information.
If the association information is a timestamp, the video processing device configures the timestamp of the target video frame as the association information associated with the target additional information.
If the association information is identification information:
the video processing device determines identification information according to the timestamp of the latest key frame before the target video frame in the video stream, the timestamp of the target video frame and the global consistency time information associated with the key frame;
the video processing apparatus configures the identification information as the association information associated with the target additional information.
704. The video processing apparatus transmits the association information and the target additional information to the information distribution apparatus.
705. The information distribution device stores the association information and the target additional information.
706. The terminal receives the video stream from the video reporting device or the video storage device.
707. The terminal processes the video stream and acquires at least one target video frame and an association instruction corresponding to the target video frame.
In this step, the terminal may acquire the association instruction when playing the target video frame.
The association instruction may be an instruction generated by the terminal in response to a click operation on a target object, where the target object is an object in the content corresponding to the target video frame.
The association instruction may be an instruction generated by the terminal in response to a specific event in the content corresponding to the target video frame.
708. The terminal acquires the target additional information.
Step 708 may precede step 707 or follow step 707.
If step 708 precedes step 707, it may be:
the terminal acquires the associated information of the target video frame and the target additional information;
and acquiring the target additional information corresponding to the target video frame from information distribution equipment according to the associated information.
If step 708 follows step 707, it may be:
the terminal acquires the association information of the target video frame and the target additional information according to the association instruction;
and acquiring the target additional information corresponding to the target video frame from information distribution equipment according to the associated information.
If the association information is a timestamp, the terminal obtains the timestamp of the target video frame as the association information; the timestamp of the target additional information is the same as the timestamp of the target video frame.
If the association information is identification information, the identification information is determined according to the timestamp of the latest key frame before the target video frame in the video stream, the timestamp of the target video frame, and the global consistency time information associated with the key frame, and the identification information is used as the association information.
709. The terminal displays the target additional information superimposed on the content corresponding to the target video frame.
This step 709 may include:
highlighting the target object, and overlaying the target additional information associated with the target object on the content corresponding to the target video frame. Or,
and overlaying the target additional information associated with the specific event at the position corresponding to the specific event in the content corresponding to the target video frame.
In this step, the target object may be highlighted, and the target additional information associated with the target object may be displayed in an overlapping manner on the content corresponding to the target video frame.
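Step 709 amounts to compositing the fetched record onto the frame content. A minimal sketch using Pillow, with the highlight styling chosen arbitrarily (the disclosure does not specify it):

```python
from PIL import Image, ImageDraw

def superimpose(frame_image: Image.Image, box, label: str) -> Image.Image:
    """Highlight the target object with a frame and draw the target
    additional information next to it, as in the backpack example."""
    out = frame_image.copy()
    draw = ImageDraw.Draw(out)
    draw.rectangle(box, outline="red", width=3)         # highlight the target object
    draw.text((box[0], box[3] + 4), label, fill="red")  # superimposed additional information
    return out
```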
The description of the embodiment corresponding to fig. 7 can also be understood with reference to the descriptions corresponding to figs. 1 to 6; details are not repeated here.
The following describes a terminal, a video processing device, and a video reporting device provided in an embodiment of the present application with reference to the drawings.
Fig. 8 is a schematic diagram of an embodiment of a terminal 80 according to the present application.
As shown in fig. 8, the terminal 80 provided in the embodiment of the present application may include:
an obtaining module 801, configured to obtain a video stream, where the video stream includes at least one video frame;
a first processing module 802, configured to process the video stream and obtain at least one target video frame and an associated instruction corresponding to the target video frame;
the second processing module 803 is configured to search for target additional information corresponding to the association instruction according to the association instruction obtained by the first processing module 802, and superimpose the target additional information on the target video frame.
Optionally, the second processing module 803 is configured to, when the association instruction is an instruction generated in response to a click operation on a target object, where the target object is an object in the content corresponding to the target video frame, highlight the target object, and superimpose the target additional information associated with the target object on the content corresponding to the target video frame.
Optionally, the second processing module 803 is configured to, when the association instruction is an instruction generated in response to a specific event in the content corresponding to the target video frame, superimpose the target additional information associated with the specific event at a position corresponding to the specific event in the content corresponding to the target video frame.
Optionally, as shown in fig. 9, the second processing module 803 includes:
a first obtaining unit 8031, configured to obtain, according to the association instruction, association information between the target video frame and target additional information;
a first search unit 8032, configured to search the target additional information corresponding to the target video frame from an information distribution apparatus according to the association information acquired by the first acquisition unit 8031;
a first processing unit 8033, configured to superimpose the target additional information acquired by the first search unit 8032 on the target video frame.
Optionally, as shown in fig. 10, the second processing module 803 includes:
a second obtaining unit 8034, configured to obtain associated information between the target video frame and the target additional information;
a second searching unit 8035, configured to search the target additional information corresponding to the target video frame from the information distribution apparatus according to the association information acquired by the second acquiring unit 8034;
the second processing unit 8036 is configured to, according to the association instruction, superimpose the target additional information acquired by the second search unit 8035 on the target video frame.
Optionally, the obtaining module 801 is further configured to obtain a timestamp of the target video frame as the association information, where the timestamp of the target additional information is the same as the timestamp of the target video frame.
Optionally, the obtaining module 801 is further configured to determine identification information according to a timestamp of a latest key frame in the video stream before the target video frame, the timestamp of the target video frame, and global consistency time information associated with the key frame, and use the identification information as the association information.
The global consistency time information is located in the key frame, or the global consistency time information is located in a supplemental enhancement information (SEI) frame that immediately follows the key frame.
Fig. 11 is a schematic diagram of an embodiment of a video processing device 90 according to an embodiment of the present application.
As shown in fig. 11, an embodiment of a video processing apparatus 90 provided in the embodiment of the present application includes:
a receiving module 901, configured to receive a video stream, where the video stream includes at least one video frame;
a determining module 902, configured to identify a target video frame in the video stream to determine target additional information associated with the target video frame, where the target video frame is included in the at least one video frame received by the receiving module 901;
a configuring module 903, configured to configure association information associated with the target video frame for the target additional information determined by the determining module 902.
Optionally, the configuring module 903 is configured to configure the timestamp of the target video frame as the association information associated with the target additional information.
Optionally, the configuring module 903 is configured to determine the identification information according to a timestamp of a latest key frame in the video stream before the target video frame, a timestamp of the target video frame, and global consistency time information associated with the key frame; configuring the identification information as the association information associated with the target additional information.
Optionally, the video processing device 90 further includes:
a sending module 904, configured to send the association information and the target additional information to an information distribution device.
Fig. 12 is a schematic diagram of an embodiment of a video reporting apparatus 100 according to an embodiment of the present disclosure.
The video reporting apparatus 100 provided in the embodiment of the present application may include:
a determining module 1001, configured to determine the type of each video frame of a video stream to be sent;
a configuration module 1002, configured to configure associated global consistency time information for a key frame when the determining module 1001 determines that a video frame is a key frame;
a sending module 1003, configured to send the key frame and the global consistency time information configured by the configuration module 1002, where the global consistency time information is used to determine identification information, the identification information is used to associate a target video frame with target additional information, and the target video frame is included in the at least one video frame.
As shown in fig. 13, for convenience of description, only the portions related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. The following takes the terminal being a mobile phone as an example:
fig. 13 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present application. Referring to fig. 13, the handset includes: radio Frequency (RF) circuit 1110, memory 1120, input unit 1130, display unit 1140, sensor 1150, audio circuit 1160, wireless fidelity (WiFi) module 1170, processor 1180, and power supply 1190. Those skilled in the art will appreciate that the handset configuration shown in fig. 13 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 13:
RF circuit 1110 may be used for receiving and transmitting signals during a message transmission or call; in particular, it receives downlink information from a base station and passes it to processor 1180 for processing, and transmits uplink data to the base station. In general, RF circuit 1110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 1110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and so on.
The memory 1120 may be used to store software programs and modules, and the processor 1180 may execute various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1130 may be used to receive a selection operation, input by a user, for a target object in a video, and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also referred to as a touch screen, can collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel 1131 with any suitable object or accessory such as a finger or a stylus) and drive corresponding connection devices according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 1180; it can also receive and execute commands sent by the processor 1180. The touch panel 1131 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch panel 1131, the input unit 1130 may include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
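In the context of this application, the touch point coordinates reported by the touch controller are what allow the terminal to decide whether a click operation falls on a target object in the currently displayed video frame. The following Python sketch shows one simple way such a hit test could be performed; the rectangle representation and the function name hit_test are illustrative assumptions, not part of this application.

from typing import List, Optional, Tuple

Rect = Tuple[int, int, int, int]  # (left, top, width, height) in pixels

def hit_test(touch_point: Tuple[int, int],
             object_regions: List[Tuple[str, Rect]]) -> Optional[str]:
    """Return the id of the target object whose region contains the touch,
    or None if the click operation selects no target object."""
    x, y = touch_point
    for object_id, (left, top, width, height) in object_regions:
        if left <= x < left + width and top <= y < top + height:
            return object_id
    return None

# Example: a touch at (120, 80) falls inside the region of "player_7".
selected = hit_test((120, 80), [("player_7", (100, 60, 50, 50))])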
The display unit 1140 may be used to display the video content corresponding to the video frames. The display unit 1140 may include a display panel 1141; optionally, the display panel 1141 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 1131 may cover the display panel 1141. When the touch panel 1131 detects a touch operation on or near it, the touch operation is transmitted to the processor 1180 to determine the type of the touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in fig. 13 the touch panel 1131 and the display panel 1141 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 1150, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1141 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1141 and/or the backlight when the mobile phone is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Other sensors that can be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described further here.
The audio circuit 1160, a speaker 1161, and a microphone 1162 may provide an audio interface between the user and the mobile phone. The audio circuit 1160 may transmit an electrical signal, converted from received audio data, to the speaker 1161, and the speaker 1161 converts the electrical signal into a sound signal for output. Conversely, the microphone 1162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 1160 and converted into audio data; the audio data is then processed by the processor 1180 and transmitted, for example, to another mobile phone via the RF circuit 1110, or output to the memory 1120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although fig. 13 shows the WiFi module 1170, it is understood that the module is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 1180 is the control center of the mobile phone. It connects the various parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 1120 and calling data stored in the memory 1120, thereby monitoring the mobile phone as a whole. Optionally, the processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1180.
The mobile phone also includes a power supply 1190 (such as a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 1180 via a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the present application, the terminal includes the processor 1180, the display unit 1140, and the like, which may perform the corresponding steps on the terminal side in the embodiments corresponding to fig. 1 to fig. 7.
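As a rough, non-limiting illustration of those terminal-side steps, the Python sketch below strings together the operations recited in claims 1, 4 and 5: obtain the association information for a target video frame, fetch the corresponding target additional information from an information distribution device, and superimpose it on the frame. The classes VideoFrame and InformationDistributionDevice and their methods are hypothetical placeholders assumed for this sketch.

class VideoFrame:
    """Minimal stand-in for a decoded video frame."""
    def __init__(self, timestamp: int):
        self.timestamp = timestamp
        self.overlays = []  # additional information superimposed so far

class InformationDistributionDevice:
    """Hypothetical lookup service keyed by association information."""
    def __init__(self):
        self._store = {}
    def publish(self, association_info: str, additional_info: dict) -> None:
        self._store[association_info] = additional_info
    def fetch(self, association_info: str):
        return self._store.get(association_info)

def on_association_instruction(frame: VideoFrame,
                               association_info: str,
                               device: InformationDistributionDevice) -> VideoFrame:
    """Search for the target additional information and superimpose it."""
    additional_info = device.fetch(association_info)
    if additional_info is not None:
        frame.overlays.append(additional_info)  # superimpose on the frame
    return frame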
Fig. 14 is a schematic structural diagram of a server 1200. The server 1200 may vary considerably in configuration or performance, and may include one or more Central Processing Units (CPUs) 1222 (e.g., one or more processors), a memory 1232, and one or more storage media 1230 (e.g., one or more mass storage devices) storing an application program 1242 or data 1244. The memory 1232 and the storage medium 1230 may be transient storage or persistent storage. The program stored in the storage medium 1230 may include one or more modules (not shown), and each module may include a series of instruction operations on the server. Still further, the central processing unit 1222 may be configured to communicate with the storage medium 1230 and execute, on the server 1200, the series of instruction operations in the storage medium 1230.
The server 1200 may also include one or more power supplies 1226, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1258, and/or one or more operating systems 1241, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 14.
The CPU 1222 is configured to execute corresponding steps executed by the video processing device or the video reporting device in the embodiments corresponding to fig. 1 to fig. 7.
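To make that division of labor concrete, the following non-limiting Python sketch pairs the two server-side roles: a video reporting device that attaches global consistency time information to each key frame (as in claim 12 below), and a video processing device that derives identification information from the latest preceding key frame and configures it as the association information of the target additional information (as in claims 9 and 11). The StreamFrame type and the additive construction of the identifier are assumptions of this sketch; the application itself does not fix how the identifier is computed.

import time
from dataclasses import dataclass

@dataclass
class StreamFrame:
    timestamp: int            # stream timestamp of the frame
    is_key_frame: bool = False
    global_time_ms: int = 0   # filled in for key frames by the reporter

def report_stream(frames):
    """Video reporting device: tag each key frame with a globally
    consistent wall-clock time (here, milliseconds since the epoch)."""
    for frame in frames:
        if frame.is_key_frame:
            frame.global_time_ms = int(time.time() * 1000)
        yield frame

def configure_association(target_frame: StreamFrame,
                          last_key_frame: StreamFrame,
                          additional_info: dict) -> dict:
    """Video processing device: derive identification information and bind
    it to the target additional information as its association information."""
    offset = target_frame.timestamp - last_key_frame.timestamp
    additional_info["association_info"] = str(last_key_frame.global_time_ms + offset)
    return additional_info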
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
A person of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The method, device, and storage medium for additional information processing provided by the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A method of additional information processing, comprising:
acquiring a video stream, wherein the video stream comprises at least one frame of video frame;
processing the video stream, and acquiring at least one target video frame and an association instruction corresponding to the target video frame;
and searching, according to the association instruction, for target additional information corresponding to the association instruction, and superimposing the target additional information on the target video frame.
2. The method according to claim 1, wherein the association instruction is an instruction generated in response to a click operation on a target object, the target object being an object in the content corresponding to the target video frame, and wherein superimposing the target additional information on the content corresponding to the target video frame comprises:
highlighting the target object, and overlaying the target additional information associated with the target object on the content corresponding to the target video frame.
3. The method according to claim 1, wherein the association instruction is an instruction generated in response to a specific event in the content corresponding to the target video frame, and wherein the superimposing the target additional information on the content corresponding to the target video frame comprises:
and overlaying the target additional information associated with the specific event at the position corresponding to the specific event in the content corresponding to the target video frame.
4. The method according to any one of claims 1 to 3, wherein the searching for the target additional information corresponding to the association instruction according to the association instruction, and superimposing the target additional information on the target video frame comprises:
acquiring the association information of the target video frame and the target additional information according to the association instruction;
searching for the target additional information corresponding to the target video frame from an information distribution device according to the association information;
superimposing the target additional information on the target video frame.
5. The method according to any one of claims 1 to 3, wherein the searching for the target additional information corresponding to the association instruction according to the association instruction, and superimposing the target additional information on the target video frame comprises:
acquiring the association information of the target video frame and the target additional information;
searching for the target additional information corresponding to the target video frame from an information distribution device according to the association information;
and superimposing, according to the association instruction, the target additional information on the target video frame.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
and acquiring a timestamp of the target video frame as the association information, wherein a timestamp of the target additional information is the same as the timestamp of the target video frame.
7. The method according to claim 4 or 5, characterized in that the method further comprises:
and determining identification information according to a timestamp of the latest key frame before the target video frame in the video stream, a timestamp of the target video frame, and global consistency time information associated with the key frame, and using the identification information as the association information.
8. The method of claim 7, wherein the global consistency time information is located in the key frame, or the global consistency time information is located in a Supplemental Enhancement Information (SEI) frame, wherein the SEI frame is located after and immediately adjacent to the key frame.
9. A method of additional information processing, comprising:
the video processing device receives a video stream, wherein the video stream comprises at least one frame of video frame;
the video processing device identifies a target video frame in the video stream to determine target additional information associated with the target video frame, wherein the target video frame is included in the at least one frame of video frame;
the video processing device configures association information associated with the target video frame for the target additional information.
10. The method of claim 9, wherein the video processing device configures association information associated with the target video frame for the target additional information, and wherein the configuring comprises:
the video processing device configures a timestamp of the target video frame as the association information associated with the target additional information.
11. The method of claim 9, wherein the video processing device configures association information associated with the target video frame for the target additional information, and wherein the configuring comprises:
the video processing device determines identification information according to a timestamp of a latest key frame before the target video frame in the video stream, a timestamp of the target video frame and global consistency time information associated with the key frame;
the video processing apparatus configures the identification information as the association information associated with the target additional information.
12. A method of additional information processing, comprising:
the video reporting device determines the type of each frame of video frame in a video stream to be sent;
when a video frame is determined to be a key frame, the video reporting device configures associated global consistency time information for the key frame;
and the video reporting device sends the key frame and the global consistency time information, where the global consistency time information is used to determine identification information, the identification information is used to associate a target video frame with target additional information, and the target video frame is included in the video stream.
13. A terminal, comprising:
an acquisition module, configured to acquire a video stream, where the video stream comprises at least one frame of video frame;
a first processing module, configured to process the video stream and acquire at least one target video frame and an association instruction corresponding to the target video frame;
and a second processing module, configured to search, according to the association instruction, for target additional information corresponding to the association instruction, and superimpose the target additional information on the target video frame.
14. A computer device, comprising a processor and a memory, wherein:
the memory is configured to store program instructions, and the processor is configured to execute the program instructions to perform the method of additional information processing according to any one of claims 1 to 8, or the method of additional information processing according to any one of claims 9 to 11, or the method of additional information processing according to claim 12.
15. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of additional information processing according to any one of claims 1 to 8, or the method of additional information processing according to any one of claims 9 to 11, or the method of additional information processing according to claim 12.
CN201910304629.8A 2019-04-16 2019-04-16 A kind of method, equipment and the storage medium of additional information processing Pending CN110213599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910304629.8A CN110213599A (en) 2019-04-16 2019-04-16 A kind of method, equipment and the storage medium of additional information processing

Publications (1)

Publication Number Publication Date
CN110213599A true CN110213599A (en) 2019-09-06

Family

ID=67786017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910304629.8A Pending CN110213599A (en) 2019-04-16 2019-04-16 A kind of method, equipment and the storage medium of additional information processing

Country Status (1)

Country Link
CN (1) CN110213599A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105373938A (en) * 2014-08-27 2016-03-02 阿里巴巴集团控股有限公司 Method for identifying commodity in video image and displaying information, device and system
CN106202282A (en) * 2016-07-01 2016-12-07 刘青山 Multi-media network shopping guidance system
CN106303621A (en) * 2015-06-01 2017-01-04 北京中投视讯文化传媒股份有限公司 The insertion method of a kind of video ads and device
CN107995155A (en) * 2017-10-11 2018-05-04 上海聚力传媒技术有限公司 Video data encoding, decoding, methods of exhibiting, video system and storage medium

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995759A (en) * 2019-12-13 2021-06-18 腾讯科技(北京)有限公司 Interactive service processing method, system, device, equipment and storage medium
US11736749B2 (en) 2019-12-13 2023-08-22 Tencent Technology (Shenzhen) Company Limited Interactive service processing method and system, device, and storage medium
CN111314759B (en) * 2020-03-02 2021-08-10 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and storage medium
CN111314759A (en) * 2020-03-02 2020-06-19 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and storage medium
CN114584830A (en) * 2020-12-02 2022-06-03 青岛海尔多媒体有限公司 Method and device for processing video and household appliance
CN112887653B (en) * 2021-01-25 2022-10-21 联想(北京)有限公司 Information processing method and information processing device
CN112887653A (en) * 2021-01-25 2021-06-01 联想(北京)有限公司 Information processing method and information processing device
CN114911550A (en) * 2021-02-09 2022-08-16 阿里巴巴集团控股有限公司 Cloud application interface processing system, method, device and equipment
CN113645486A (en) * 2021-07-16 2021-11-12 北京爱笔科技有限公司 Video data processing method and device, computer equipment and storage medium
CN113596493A (en) * 2021-07-26 2021-11-02 腾讯科技(深圳)有限公司 Interactive special effect synchronization method and related device
CN113596493B (en) * 2021-07-26 2023-03-10 腾讯科技(深圳)有限公司 Interactive special effect synchronization method and related device
WO2023071861A1 (en) * 2021-10-29 2023-05-04 影石创新科技股份有限公司 Data visualization display method and apparatus, computer device, and storage medium
CN114630138A (en) * 2022-03-14 2022-06-14 上海哔哩哔哩科技有限公司 Configuration information issuing method and system
CN114630138B (en) * 2022-03-14 2023-12-08 上海哔哩哔哩科技有限公司 Configuration information issuing method and system
CN115278292A (en) * 2022-06-30 2022-11-01 北京爱奇艺科技有限公司 Video reasoning information display method and device and electronic equipment
CN115278292B (en) * 2022-06-30 2023-12-05 北京爱奇艺科技有限公司 Video reasoning information display method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN110213599A (en) A kind of method, equipment and the storage medium of additional information processing
CN109819179B (en) Video editing method and device
US9924205B2 (en) Video remote-commentary synchronization method and system, and terminal device
CN107396137B (en) Online interaction method, device and system
CN106210754B (en) Method, server, mobile terminal, system and storage medium for controlling live video
WO2018192415A1 (en) Data live broadcast method, and related device and system
CN106803993B (en) Method and device for realizing video branch selection playing
JP6430656B6 (en) System, method and apparatus for displaying content items
EP2961172A1 (en) Method and device for information acquisition
US9128593B2 (en) Enabling an interactive program associated with a live broadcast on a mobile device
CN111491197B (en) Live content display method and device and storage medium
CN107333162B (en) Method and device for playing live video
WO2014205761A1 (en) Data presentation method, terminal and system
CN104883358A (en) Interaction method and device based on recommended content
CN113141524B (en) Resource transmission method, device, terminal and storage medium
CN106303733B (en) Method and device for playing live special effect information
CN110662090B (en) Video processing method and system
CN111182335B (en) Streaming media processing method, device, equipment and computer readable storage medium
CN110248245B (en) Video positioning method and device, mobile terminal and storage medium
CN108900855B (en) Live content recording method and device, computer readable storage medium and server
CN110087149A (en) A kind of video image sharing method, device and mobile terminal
CN112995759A (en) Interactive service processing method, system, device, equipment and storage medium
CN107908765B (en) Game resource processing method, mobile terminal and server
CN108965977B (en) Method, device, storage medium, terminal and system for displaying live gift
CN109766505B (en) Information resource pushing method, system, device, equipment and storage medium

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190906)