CN113938712B - Video playing method and device and electronic equipment - Google Patents
- Publication number
- CN113938712B CN202111194498.6A
- Authority
- CN
- China
- Prior art keywords
- story line
- actor
- playing
- information
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N21/23418—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/23424—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/25866—Management of end-user data
- H04N21/26258—Content or additional data distribution scheduling for generating a list of items to be played back in a given order, e.g. playlist
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/44016—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
- H04N21/4668—Learning process for intelligent management for recommending content, e.g. movies
- H04N21/8549—Creating video summaries, e.g. movie trailer

(All classifications fall under H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD].)
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Computer Security & Cryptography (AREA)
- Computer Graphics (AREA)
- Computing Systems (AREA)
- Television Signal Processing For Recording (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application relates to data matching technology and discloses a video playing method, a video playing device, and an electronic device, comprising the following steps: acquiring a story line set and an actor information list of a film, wherein each story line in the story line set comprises a plurality of video clips; combining all the video clips in each story line in chronological order to obtain a story line segment corresponding to each story line; calculating the appearance weight of each actor in the actor information list within each story line segment to obtain a corresponding actor segment weight value set; when a user viewing request is received, calculating the playing weight of each story line segment according to the user viewing request and the actor segment weight value set; and filtering all the story line segments according to the playing weights and playing the filtered story line segments. The application can improve video playing efficiency.
Description
Technical Field
The present application relates to the field of data matching technologies, and in particular, to a video playing method, a video playing device, and an electronic device.
Background
In today's busy society, many people have only fragmented time in which to watch videos and no time to watch a complete film, so most people want a fast way to view a film within that fragmented time.
However, the current video playing mode controls viewing speed only through speed multipliers. Although this makes playback faster, users still cannot quickly locate the episodes they like and must keep adjusting the multiplier, so video playing efficiency is low.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present application provides a video playing method and apparatus that can improve the efficiency of fast viewing.
In a first aspect, the present application provides a video playing method for fast viewing, including:
acquiring a story line set and an actor information list of a film, wherein each story line in the story line set comprises a plurality of video clips;
combining all the video clips in each story line in chronological order to obtain a story line segment corresponding to each story line;
calculating the appearance weight of each actor in the actor information list within each story line segment to obtain a corresponding actor segment weight value set;
when a user viewing request is received, calculating the playing weight of each story line segment according to the user viewing request and the actor segment weight value set;
and filtering all the story line segments according to the playing weights, and playing the filtered story line segments.
Optionally, the calculating of the appearance weight of each actor in the actor information list within each story line segment to obtain a corresponding actor segment weight value set includes:
selecting one of the story line segments, framing the selected story line segment to obtain a plurality of video frames, and selecting actor image information of one actor from the actor information list;
extracting face information in the actor image information;
counting the total number of video frames in the selected story line segments;
counting the number of video frames containing the face information in the selected story line segment to obtain the number of target video frames;
calculating an actor segment weight value of the selected actor in the selected story line segment according to the total number of the video frames and the target video frame number corresponding to the selected actor;
and summarizing all actor segment weight values corresponding to each story line segment to obtain all actor segment weight value sets in each story line segment.
Optionally, the calculating, according to the user viewing request and the actor segment weight value set, a playing weight of each of the story line segments includes:
extracting the user information from the user viewing request;
querying, in a preset historical viewing information base, the user historical viewing information corresponding to the user information;
performing vector conversion on the user historical viewing information to obtain a user viewing vector;
acquiring an actor information text of each actor in the actor information list, and carrying out vector transformation on the actor information text to obtain an actor feature vector;
calculating the association degree of the user viewing vector and the actor feature vector to obtain actor association degree;
and carrying out weighted calculation according to the actor association degree and each actor segment weight value set to obtain the playing weight of the corresponding story line segment.
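The association-degree and weighted-calculation steps above can be sketched as follows. This is a minimal illustration, not the patent's definitive implementation: the patent does not name a specific association measure, so cosine similarity is an assumed choice, and the function names (`cosine_similarity`, `playing_weight`) are hypothetical.

```python
import math

def cosine_similarity(u, v):
    # Association degree between the user viewing vector and one actor's
    # feature vector. The patent does not name a measure; cosine similarity
    # is an assumed, common choice.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def playing_weight(user_vec, actor_vecs, segment_weights):
    # Weighted calculation: each actor's association degree multiplied by that
    # actor's segment weight value in the story line segment, then summed.
    return sum(cosine_similarity(user_vec, actor_vecs[actor]) * w
               for actor, w in segment_weights.items())

# Actor "A" matches the user's viewing vector exactly, actor "B" not at all,
# so only A's segment weight value contributes to the playing weight.
weight = playing_weight([1.0, 0.0],
                        {"A": [1.0, 0.0], "B": [0.0, 1.0]},
                        {"A": 0.5, "B": 0.2})
```

Under this sketch, the story line segment whose featured actors best match the user's viewing history receives the largest playing weight.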
Optionally, the performing of vector conversion on the user historical viewing information to obtain a user viewing vector includes:
performing word segmentation on the user historical viewing information to obtain a text word segmentation set;
combining each word in the text word segmentation set according to its order in the user historical viewing information to obtain a text word segmentation sequence;
converting each word in the text word segmentation sequence into a vector to obtain a text word vector;
and combining all the text word vectors according to the sequence of the corresponding words in the text word segmentation sequence to obtain a user viewing vector.
Optionally, the combining all the text word vectors according to the sequence of the corresponding words in the text word segmentation sequence to obtain the user viewing vector includes:
carrying out arithmetic average calculation on all elements in the text word vector to obtain a vector characteristic value;
and combining all the vector characteristic values according to the sequence of the corresponding words in the text word segmentation sequence to obtain the user film watching vector.
Optionally, the filtering of all the story line segments according to the playing weights and the playing of the filtered story line segments includes:
selecting a story line segment corresponding to the largest playing weight from the playing weights of all the story line segments to obtain a target story line segment;
in the target story line segment, determining a video frame which does not contain any one of the face information as an invalid video frame;
and sending the target story line segment, together with the position information of the invalid video frames it contains, to a preset playing device, so that the playing device plays the target story line segment and plays the invalid video frames in the target story line segment at a preset playing speed multiplier.
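The selection of invalid video frames and the variable-speed playback described above can be sketched as follows. The function names and the 2.0 fast multiplier are illustrative assumptions, not values taken from the patent; face recognition is abstracted away as a per-frame set of recognized actors.

```python
def invalid_frame_positions(frame_faces):
    # A frame is invalid when it contains none of the actors' face information.
    return [i for i, faces in enumerate(frame_faces) if not faces]

def playback_speeds(num_frames, invalid, normal=1.0, fast=2.0):
    # Per-frame speed multipliers: invalid frames get the preset fast
    # multiplier (2.0 is an assumed value); all other frames play normally.
    invalid_set = set(invalid)
    return [fast if i in invalid_set else normal for i in range(num_frames)]

# Frame 1 shows no actor, so only it is fast-forwarded.
positions = invalid_frame_positions([{"A"}, set(), {"B"}])
speeds = playback_speeds(3, positions)
```

This keeps the frames in which the actors appear at normal speed while skimming through the rest, matching the fast-viewing goal of the method.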
Optionally, the playing of the target story line segment and the playing of the invalid video frames in the target story line segment at a preset playing speed multiplier includes:
dividing a playing interface contained in the playing equipment into two playing areas to obtain a first playing area and a second playing area;
and playing the target story line segment in the first playing area, and playing all story line segments except the target story line segment in the second playing area.
In a second aspect, the present application provides a video playing device for fast viewing, comprising:
an appearance weight calculation module, configured to acquire a story line set and an actor information list of a film, wherein each story line in the story line set comprises a plurality of video clips; combine all the video clips in each story line in chronological order to obtain a story line segment corresponding to each story line; and calculate the appearance weight of each actor in the actor information list within each story line segment to obtain a corresponding actor segment weight value set;
The playing weight calculation module is used for calculating the playing weight of each story line segment according to the user viewing request and the actor segment weight value set when the user viewing request is received;
and the film playing module is used for screening all the story line segments according to the playing weight and playing the screened story line segments.
In a third aspect, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the video playing method according to any one of the embodiments of the first aspect when executing the program stored in the memory.
In order to solve the above-mentioned problems, the present application also provides a computer-readable storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the video playback method described above.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the method provided by the embodiment of the application, the playing weight corresponding to each story line segment is calculated according to the user viewing request and the actor segment weight value set, so that the story line segments the user prefers are matched automatically; the user does not need to search manually, and video playing efficiency is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a detailed flowchart of a video playing method according to an embodiment of the present application.
Fig. 2 is a detailed flowchart of a playback weight obtained in a video playback method according to an embodiment of the present application.
Fig. 3 is a schematic block diagram of a video playing device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device for quick video viewing according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a flow chart of a video playing method according to an embodiment of the present application, where in the embodiment of the present application, the video playing method includes:
s1, acquiring a story line set and an actor information list of a film, wherein each story line in the story line set comprises a plurality of film and television fragments;
in detail, the film in the embodiment of the present application includes a movie or a television drama. The actor information list contains the actor information of each leading actor in the film, and each actor's information consists of a corresponding actor information text and actor image information, wherein the actor information text is information text such as the film genre, leading-cast information, and director information of the films and television dramas in which the actor has appeared.
Furthermore, since a movie or television drama contains a plurality of fixed story lines that are interleaved with one another to maintain the continuity of the plot, and in order to enable a user to watch the film quickly, the embodiment of the application acquires a story line set of the film, wherein the story line set is the set of the different story lines in the film, and each story line contains all the video clips corresponding to that story line.
S2, combining all the video clips in each story line according to the time sequence to obtain a story line clip corresponding to each story line;
in the embodiment of the application, in order to ensure that a user can quickly browse the plots of the different story lines in the film, all the video clips in each story line are combined in chronological order to obtain an initial story line segment. For example, if the video clips corresponding to story line A are the 0-15 minute, 18-19 minute, and 20-22 minute clips of the film, the three clips are combined in chronological order into one clip to obtain the initial story line segment. Further, the time axis of the initial story line segment is updated to obtain the story line segment. For example, if the initial story line segment is obtained by combining the 0-15 minute, 18-19 minute, and 21-22 minute video clips, its time axis is updated to 0-17 minutes according to the total duration of the videos in the segment, thereby obtaining the story line segment.
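The chronological merging and time-axis update described above can be sketched as follows. This is a minimal illustration under stated assumptions: `merge_story_line` is a hypothetical name, and clips are given as (start, end) minute pairs.

```python
def merge_story_line(clips):
    # clips: (start_min, end_min) pairs of one story line, in any order.
    # Combine them chronologically and rebase onto a continuous 0-based
    # time axis, as in the 0-17 minute example in the text.
    merged, t = [], 0
    for start, end in sorted(clips):
        duration = end - start
        merged.append({"original_start": start, "new_start": t,
                       "duration": duration})
        t += duration
    return merged, t  # t = total duration of the story line segment

# The 0-15, 18-19, and 21-22 minute clips yield a 17-minute segment.
segment, total = merge_story_line([(18, 19), (0, 15), (21, 22)])
```

Each entry keeps the clip's original position alongside its new position on the rebased time axis, which is what allows the segment to play back continuously.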
S3, calculating the appearance weight of each actor in the actor information list within each story line segment to obtain a corresponding actor segment weight value set;
in detail, in the embodiment of the present application, calculating the weight of each actor appearing in each story line segment in the actor information list to obtain a corresponding actor segment weight value set includes:
step I: selecting one of the story line segments, framing the selected story line segment to obtain a plurality of video frames, and selecting actor image information of one actor from the actor information list;
step II: extracting face information in the actor image information;
alternatively, the embodiment of the present application may extract the face information in the actor image information by using currently known face recognition technology.
Step III: counting the total number of video frames in the selected story line segments;
step IV: counting the number of video frames containing the face information in the selected story line segment to obtain the number of target video frames;
optionally, the embodiment of the application uses a face recognition technology to determine whether the video frame contains the face information.
Step V: calculating an actor segment weight value of the selected actor in the selected story line segment according to the total number of the video frames and the target video frame number corresponding to the selected actor;
for example: the total number of video frames in the selected story line segment F is 100, and the corresponding number of target video frames in the story line segment F is 50 for actor a, then the actor segment weight value in the story line segment F for actor a is 50/100=0.5.
Step VI: and summarizing all actor segment weight values corresponding to each story line segment to obtain all actor segment weight value sets in each story line segment.
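Steps I-VI above reduce to a frame-counting ratio, which can be sketched as follows. This is an illustrative sketch in which face recognition is abstracted away: each frame is represented by the set of actors recognized in it, and `actor_segment_weight` is a hypothetical name.

```python
def actor_segment_weight(frame_faces, actor):
    # frame_faces: for every video frame of the story line segment, the set
    # of actors whose face information was recognized in that frame.
    # The weight is the number of target frames (frames containing the
    # actor's face) divided by the total number of frames.
    total = len(frame_faces)
    target = sum(1 for faces in frame_faces if actor in faces)
    return target / total if total else 0.0

# 100 frames, 50 of which contain actor A's face -> weight 50/100 = 0.5,
# matching the worked example in the text.
frames = [{"A"}] * 50 + [set()] * 50
```

Summarizing this weight for every actor and every story line segment yields the actor segment weight value sets of Step VI.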
S4, when a user film watching request is received, calculating the playing weight of each story line segment according to the user film watching request and the actor segment weight value set;
in detail, in the embodiment of the present application, the user viewing request includes user information, where the user information is personal account information of a user.
In detail, referring to fig. 2, in the embodiment of the present application, according to the user viewing request and the actor segment weight value set, the calculating the playing weight of each story line segment includes:
s41, extracting user information in the user film watching request;
optionally, in the embodiment of the present application, the user information is account information of the user in the film watching program.
S42, inquiring user history viewing information corresponding to the user information in a preset history viewing information base;
optionally, in the embodiment of the present application, a query statement is constructed from the user information, and the query statement is used to query, in a preset historical viewing information base, the user historical viewing information corresponding to the user information, wherein the historical viewing information base is a database of users' historical viewing information for a given viewing program. Optionally, in the embodiment of the present application, the user historical viewing information is information text such as the film genre, main cast information, and director information of the films the user has watched in the past.
S43, carrying out vector conversion on the user history film watching information to obtain a user film watching vector;
in the embodiment of the application, the user viewing features contained in the user historical viewing information are discrete. In order to express the user viewing features accurately, vector conversion is performed on the user historical viewing information to obtain a user viewing vector; converting the text data into a vector also reduces the data size and therefore the consumption of computing resources.
Further, in the embodiment of the present application, vector conversion is performed on the user history viewing information to obtain the user viewing vector, including:
step A, word segmentation processing is carried out on the user history film watching information to obtain a text word segmentation set;
in the embodiment of the application, the user history film-viewing information is information text such as film type, main creator information, main director information and the like of films which are historically watched by the user.
Step B, combining each word in the text word segmentation set according to the sequence in the user history film viewing information to obtain a text word segmentation sequence;
for example, if the user historical viewing data is "like watching horror films", the text word segmentation set comprises the words "like", "watch", and "horror film"; combining these three words according to their order in the user historical viewing data yields the text word segmentation sequence [like, watch, horror film].
Step C, converting each word in the text word segmentation sequence into a vector to obtain a text word vector, and combining all the text word vectors according to the sequence of the corresponding words in the text word segmentation sequence to obtain a user film watching vector;
Optionally, the embodiment of the present application may convert each word in the text word segmentation sequence into a vector using a trained word2vec model.
For example: text word segmentation sequence is like, watch, horror tablet]The text word vector corresponding to like isThe text word vector corresponding to "see" is +.>The text word vector corresponding to "horror tablet" is->Then the user viewing vector is
Further, in the embodiment of the application, the text word vectors are only feature vectors of individual words. In order to preserve the contextual association between different words, and thus measure the user viewing features in the user history viewing information more accurately, all the text word vectors are combined according to the order of the corresponding words in the text word segmentation sequence to obtain the user viewing vector.
Specifically, in the embodiment of the application, combining all the text word vectors according to the order of the corresponding words in the text word segmentation sequence to obtain the user viewing vector comprises: performing an arithmetic average calculation on all elements in each text word vector to obtain a vector feature value. For example: if the text word vector is (1, 3, 2), the corresponding vector feature value is (1+3+2)/3 = 2. All the vector feature values are then combined according to the order of the corresponding words in the text word segmentation sequence to obtain the user viewing vector.
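Step C and the arithmetic-mean combination above can be sketched as follows. The embedding table is a stand-in for a trained word2vec model, and its vector values are purely illustrative.

```python
# Sketch of Step C plus the arithmetic-mean combination described above.
# EMBEDDINGS stands in for a trained word2vec model; values are illustrative.

EMBEDDINGS = {
    "like":   [1.0, 3.0, 2.0],
    "watch":  [0.0, 2.0, 1.0],
    "horror": [4.0, 0.0, 2.0],
}

def feature_value(vec):
    """Arithmetic mean of all elements of one text word vector."""
    return sum(vec) / len(vec)

def user_viewing_vector(sequence):
    """Combine per-word feature values in sequence order."""
    return [feature_value(EMBEDDINGS[w]) for w in sequence]

# (1 + 3 + 2) / 3 = 2.0, matching the worked example in the text
print(user_viewing_vector(["like", "watch", "horror"]))  # [2.0, 1.0, 2.0]
```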
S44, acquiring an actor information text of each actor in the actor information list, and carrying out vector transformation on the actor information text to obtain actor feature vectors;
In the embodiment of the application, the actor information text is information text such as the film type, main creator information, and leading-cast information of the movies and dramas in which the actor has participated.
S45, calculating the association degree of the user viewing vector and the actor feature vector to obtain actor association degree;
Optionally, in the embodiment of the present application, the Pearson correlation coefficient may be used to calculate the degree of association between the user viewing vector and the actor feature vector. By calculating this association degree, the embodiment of the application measures the user's degree of preference for each actor.
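One way to compute the actor association degree as described above is the Pearson correlation coefficient between the user viewing vector and an actor feature vector, implemented here directly with no third-party dependency; the input vectors are illustrative.

```python
# Pearson correlation coefficient between two equal-length vectors.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Identical trends correlate perfectly; opposite trends correlate negatively.
print(pearson([1, 2, 3], [1, 2, 3]))  # 1.0
print(pearson([1, 2, 3], [3, 2, 1]))  # -1.0
```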
S46, carrying out weighted calculation according to the actor association degree and each actor segment weight value set to obtain the play weight of the corresponding story line segment;
For example: the actor information list contains three actors A, B, and C. For a given story line segment, the actor segment weight value corresponding to actor A is 0.3, the actor segment weight value corresponding to actor B is 0.4, and the actor segment weight value corresponding to actor C is 0.5; the actor association degree corresponding to actor A is 0.7, the actor association degree corresponding to actor B is 0.8, and the actor association degree corresponding to actor C is 0.9. The playing weight corresponding to the story line segment is then 0.3 x 0.7 + 0.4 x 0.8 + 0.5 x 0.9 = 0.98.
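The weighted calculation in the worked example above reduces to a dot product over actors; a minimal sketch, with the function name and data structures chosen for illustration:

```python
# Play weight = sum over actors of
# (actor segment weight value x actor association degree).

def play_weight(segment_weights, associations):
    """Both arguments map actor name -> value for one story line segment."""
    return sum(segment_weights[a] * associations[a] for a in segment_weights)

weights = {"A": 0.3, "B": 0.4, "C": 0.5}   # actor segment weight values
assoc   = {"A": 0.7, "B": 0.8, "C": 0.9}   # actor association degrees
# 0.3*0.7 + 0.4*0.8 + 0.5*0.9 = 0.98, as in the example above
print(play_weight(weights, assoc))
```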
S5, screening all the story line segments according to the playing weight, and playing the screened story line segments.
In detail, in the embodiment of the present application, screening all the story line segments according to the playing weights and playing the screened story line segments includes:
selecting a story line segment corresponding to the largest playing weight from the playing weights of all the story line segments to obtain a target story line segment;
in the target story line segment, determining a video frame which does not contain any one of the face information as an invalid video frame;
and sending the target story line segment and the position information of the contained invalid video frames to a preset playing device, so that the playing device plays the target story line segment, and playing the invalid video frames in the target story line segment according to a preset playing double speed.
Optionally, in the embodiment of the present application, the playing device is an intelligent terminal device that includes a playing interface. The position information is the position of the invalid video frames in the target story line segment, for example: the invalid video frames are the first frame and the second frame in the story line segment.
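The screening steps above can be sketched as follows: pick the story line segment with the largest play weight, then flag the frames containing none of the extracted faces as invalid. Frames are modelled here as sets of detected face IDs; all names and data are illustrative.

```python
# Sketch of the screening step: max-weight segment plus invalid-frame indices.

def screen_segments(play_weights):
    """Return the segment ID with the largest play weight."""
    return max(play_weights, key=play_weights.get)

def invalid_frames(frames, known_faces):
    """Indices of frames containing none of the known actor faces."""
    return [i for i, faces in enumerate(frames)
            if not (faces & known_faces)]

target = screen_segments({"A": 0.98, "B": 0.75, "C": 0.60})  # -> "A"
frames = [set(), {"actor_a"}, {"passerby"}, {"actor_b"}]
# Frames 0 and 2 contain no listed actor face, so they are invalid.
print(invalid_frames(frames, {"actor_a", "actor_b"}))  # [0, 2]
```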
Further, in the embodiment of the present application, playing the target story line segment, and playing the invalid video frames in the target story line segment according to a preset playing speed, including:
dividing a playing interface contained in the playing equipment into two playing areas to obtain a first playing area and a second playing area;
Optionally, in the embodiment of the present application, the preset playing interface may be divided into two rectangular playing areas either horizontally or vertically, where the playing interface is a screen interface capable of displaying video.
And playing the target story line segment in the first playing area, and playing all story line segments except the target story line segment in the second playing area.
Optionally, in the embodiment of the present application, all the story line segments except the target story line segment are either combined and played in the second playing area, or the second playing area is further divided into different playing areas, each of which plays one story line segment.
In the embodiment of the application, the preset playing interface is divided into two playing areas by a split screen.
Optionally, in the embodiment of the present application, the playing interface may be divided into two playing areas according to a preset dividing ratio, for example: and equally dividing the playing interface into two playing areas.
For example: there are three story line segments A, B, and C in total, of which story line segment A is the target story line segment. The target story line segment is then played in the first playing area, and story line segments B and C are played in the second playing area; the story line segments B and C can either be combined and then played in the second playing area, or the second playing area can be further divided into two playing areas, with one divided playing area playing story line segment B and the other playing story line segment C.
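The interface division described above can be sketched as a simple geometry helper; the function name, rectangle convention (x, y, width, height), and split ratio are all assumptions for illustration.

```python
# Sketch of dividing the playing interface into two playing areas, either
# vertically (side by side) or horizontally (stacked), at a split ratio
# (0.5 = equal halves, as in the equal-division example in the text).

def split_interface(width, height, mode="vertical", ratio=0.5):
    if mode == "vertical":            # side-by-side playing areas
        w1 = int(width * ratio)
        return (0, 0, w1, height), (w1, 0, width - w1, height)
    else:                             # stacked playing areas
        h1 = int(height * ratio)
        return (0, 0, width, h1), (0, h1, width, height - h1)

first, second = split_interface(1920, 1080)   # equal vertical split
# first  -> (0, 0, 960, 1080): plays the target story line segment
# second -> (960, 0, 960, 1080): plays the remaining story line segments
```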
Further, when the corresponding story line segment is played in the playing area, the embodiment of the application plays the invalid video frames in the story line segment according to the preset playing speed.
Optionally, in the embodiment of the present application, when the user clicks the playing area in which a story line segment is playing, the clicked playing area is enlarged according to a preset enlargement ratio, for example enlarged to occupy the whole playing interface, so that the user can conveniently adjust which story line segment is viewed.
Fig. 3 is a functional block diagram of the video playing device according to the present application.
Depending on the implemented functions, the video playback device 100 may include a departure weight calculation module 101, a playback weight calculation module 102, and a movie playback module 103. A module of the application, which may also be referred to as a unit, is a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
The departure weight calculation module 101 is configured to obtain a story line set of a movie and an actor information list, where each story line in the story line set includes a plurality of movie fragments; combine all the video clips in each story line according to the time sequence to obtain a story line clip corresponding to each story line; and calculate the weight of each actor in the actor information list in each story line segment to obtain a corresponding actor segment weight value set.
The play weight calculation module 102 is configured to calculate, when a user viewing request is received, a play weight of each of the story line segments according to the user viewing request and the actor segment weight value set;
the film playing module 103 is configured to screen all the story line segments according to the playing weights, and play the screened story line segments.
In detail, each module of the video playing device 100 in the embodiment of the present application adopts the same technical means as the video playing method described in fig. 1 to 2, and can produce the same technical effects, which are not repeated here.
As shown in fig. 4, an embodiment of the present application provides an electronic device including a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 communicate with one another through the communication bus 114.
a memory 113 for storing a computer program;
in one embodiment of the present application, the processor 111 is configured to implement the video playing method provided in any one of the foregoing method embodiments when executing the program stored in the memory 113, where the method includes:
acquiring a story line set and an actor information list of a film, wherein each story line in the story line set comprises a plurality of film and television fragments;
combining all the video clips in each story line according to the time sequence to obtain a story line clip corresponding to each story line;
calculating the weight of each actor in the actor information list in each story line segment to obtain a corresponding actor segment weight set;
when a user video watching request is received, calculating the playing weight of each story line segment according to the user video watching request and the actor segment weight value set;
and screening all the story line segments according to the playing weight, and playing the screened story line segments.
The communication bus 114 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 114 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface 112 is used for communication between the above-described electronic device and other devices.
The memory 113 may include a Random Access Memory (RAM) or a nonvolatile memory (non-volatile memory), such as at least one disk memory. Alternatively, the memory 113 may be at least one memory device located remotely from the processor 111.
The processor 111 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The embodiment of the application also provides a computer readable storage medium, which is characterized in that the computer readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the video playing method of any of the above embodiments.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., Solid State Disk (SSD)), among others.
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (9)
1. A video playing method, the method comprising:
acquiring a story line set and an actor information list of a film, wherein each story line in the story line set comprises a plurality of film and television fragments;
combining all the video clips in each story line according to the time sequence to obtain a story line clip corresponding to each story line;
calculating the weight of each actor in the actor information list in each story line segment to obtain a corresponding actor segment weight set;
when a user viewing request is received, extracting user information in the user viewing request;
inquiring user history viewing information corresponding to the user information in a preset history viewing information base;
vector conversion is carried out on the user history film watching information to obtain a user film watching vector;
acquiring an actor information text of each actor in the actor information list, and carrying out vector transformation on the actor information text to obtain an actor feature vector;
calculating the association degree of the user viewing vector and the actor feature vector to obtain actor association degree;
weighting calculation is carried out according to the actor association degree and each actor segment weight value set, and the playing weight of the corresponding story line segment is obtained;
and screening all the story line segments according to the playing weight, and playing the screened story line segments.
2. The video playing method as set forth in claim 1, wherein said calculating the weight of each actor in the actor information list for each of the story line segments to obtain a corresponding actor segment weight value set includes:
selecting one of the story line segments, framing the selected story line segment to obtain a plurality of video frames, and selecting actor image information of one actor from the actor information list;
extracting face information in the actor image information;
counting the total number of video frames in the selected story line segments;
counting the number of video frames containing the face information in the selected story line segment to obtain the number of target video frames;
calculating an actor segment weight value of the selected actor in the selected story line segment according to the total number of the video frames and the target video frame number corresponding to the selected actor;
and summarizing all actor segment weight values corresponding to each story line segment to obtain all actor segment weight value sets in each story line segment.
3. The method of claim 2, wherein the performing vector conversion on the user history viewing information to obtain a user viewing vector comprises:
word segmentation processing is carried out on the user history film watching information to obtain a text word segmentation set;
combining each word in the text word segmentation set according to the sequence in the user history film viewing information to obtain a text word segmentation sequence;
converting each word in the text word segmentation sequence into a vector to obtain a text word vector;
and combining all the text word vectors according to the sequence of the corresponding words in the text word segmentation sequence to obtain a user viewing vector.
4. The method of claim 3, wherein the combining all the text word vectors according to the sequence of the corresponding words in the text word segmentation sequence to obtain the user viewing vector comprises:
carrying out arithmetic average calculation on all elements in the text word vector to obtain a vector characteristic value;
and combining all the vector characteristic values according to the sequence of the corresponding words in the text word segmentation sequence to obtain the user film watching vector.
5. The video playing method according to any one of claims 2 to 4, wherein the step of screening all the story line segments according to the playing weights, playing the screened story line segments, includes:
selecting a story line segment corresponding to the largest playing weight from the playing weights of all the story line segments to obtain a target story line segment;
in the target story line segment, determining a video frame which does not contain any one of the face information as an invalid video frame;
and sending the target story line segment and the position information of the contained invalid video frames to a preset playing device, so that the playing device plays the target story line segment, and playing the invalid video frames in the target story line segment according to a preset playing double speed.
6. The video playing method according to claim 5, wherein the playing the target story line segment and playing the invalid video frames in the target story line segment at a preset playing speed multiple comprises:
dividing a playing interface contained in the playing equipment into two playing areas to obtain a first playing area and a second playing area;
and playing the target story line segment in the first playing area, and playing all story line segments except the target story line segment in the second playing area.
7. A video playback device, comprising:
the system comprises a departure weight calculation module, a video processing module and a video processing module, wherein the departure weight calculation module is used for acquiring a story line set and an actor information list of a film, and each story line in the story line set comprises a plurality of film and video fragments; combining all the video clips in each story line according to the time sequence to obtain a story line clip corresponding to each story line; calculating the weight of each actor in the actor information list in each story line segment to obtain a corresponding actor segment weight set;
the playing weight calculation module is used for extracting user information in the user viewing request when the user viewing request is received; inquiring user history viewing information corresponding to the user information in a preset history viewing information base; vector conversion is carried out on the user history film watching information to obtain a user film watching vector; acquiring an actor information text of each actor in the actor information list, and carrying out vector transformation on the actor information text to obtain an actor feature vector; calculating the association degree of the user viewing vector and the actor feature vector to obtain actor association degree; weighting calculation is carried out according to the actor association degree and each actor segment weight value set, and the playing weight of the corresponding story line segment is obtained;
and the film playing module is used for screening all the story line segments according to the playing weight and playing the screened story line segments.
8. The electronic equipment is characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the video playing method according to any one of claims 1 to 6 when executing a program stored on a memory.
9. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the video playback method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111194498.6A CN113938712B (en) | 2021-10-13 | 2021-10-13 | Video playing method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111194498.6A CN113938712B (en) | 2021-10-13 | 2021-10-13 | Video playing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113938712A CN113938712A (en) | 2022-01-14 |
CN113938712B true CN113938712B (en) | 2023-10-10 |
Family
ID=79279152
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111194498.6A Active CN113938712B (en) | 2021-10-13 | 2021-10-13 | Video playing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113938712B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105744292A (en) * | 2016-02-02 | 2016-07-06 | 广东欧珀移动通信有限公司 | Video data processing method and device |
CN107241622A (en) * | 2016-03-29 | 2017-10-10 | 北京三星通信技术研究有限公司 | video location processing method, terminal device and cloud server |
CN107820138A (en) * | 2017-11-06 | 2018-03-20 | 广东欧珀移动通信有限公司 | Video broadcasting method, device, terminal and storage medium |
CN108271069A (en) * | 2017-12-11 | 2018-07-10 | 北京奇艺世纪科技有限公司 | The segment filter method and device of a kind of video frequency program |
CN108401193A (en) * | 2018-03-21 | 2018-08-14 | 北京奇艺世纪科技有限公司 | A kind of video broadcasting method, device and electronic equipment |
CN108471544A (en) * | 2018-03-28 | 2018-08-31 | 北京奇艺世纪科技有限公司 | A kind of structure video user portrait method and device |
CN108933970A (en) * | 2017-05-27 | 2018-12-04 | 北京搜狗科技发展有限公司 | The generation method and device of video |
CN109275047A (en) * | 2018-09-13 | 2019-01-25 | 周昕 | Video information processing method and device, electronic equipment, storage medium |
WO2019144838A1 (en) * | 2018-01-24 | 2019-08-01 | 北京一览科技有限公司 | Method and apparatus for use in acquiring evaluation result information of video |
CN110557683A (en) * | 2019-09-19 | 2019-12-10 | 维沃移动通信有限公司 | Video playing control method and electronic equipment |
CN111314784A (en) * | 2020-02-28 | 2020-06-19 | 维沃移动通信有限公司 | Video playing method and electronic equipment |
CN111711856A (en) * | 2020-08-19 | 2020-09-25 | 深圳电通信息技术有限公司 | Interactive video production method, device, terminal, storage medium and player |
CN112887780A (en) * | 2021-01-21 | 2021-06-01 | 维沃移动通信有限公司 | Video name display method and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030107592A1 (en) * | 2001-12-11 | 2003-06-12 | Koninklijke Philips Electronics N.V. | System and method for retrieving information related to persons in video programs |
US8867901B2 (en) * | 2010-02-05 | 2014-10-21 | Theatrics. com LLC | Mass participation movies |
EP3291110A1 (en) * | 2016-09-02 | 2018-03-07 | OpenTV, Inc. | Content recommendations using personas |
KR102161784B1 (en) * | 2017-01-25 | 2020-10-05 | 한국전자통신연구원 | Apparatus and method for servicing content map using story graph of video content and user structure query |
CN108337532A (en) * | 2018-02-13 | 2018-07-27 | 腾讯科技(深圳)有限公司 | Perform mask method, video broadcasting method, the apparatus and system of segment |
Also Published As
Publication number | Publication date |
---|---|
CN113938712A (en) | 2022-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1538351B (en) | Method and computer for generating visually representative video thumbnails | |
US9740775B2 (en) | Video retrieval based on optimized selected fingerprints | |
RU2577189C2 (en) | Profile based content retrieval for recommender systems | |
US9202523B2 (en) | Method and apparatus for providing information related to broadcast programs | |
US20160210284A1 (en) | System and method for capturing a multimedia content item by a mobile device and matching sequentially relevant content to the multimedia content item | |
CN111279709B (en) | Providing video recommendations | |
US10524005B2 (en) | Facilitating television based interaction with social networking tools | |
CN111523566A (en) | Target video clip positioning method and device | |
WO2019134587A1 (en) | Method and device for video data processing, electronic device, and storage medium | |
CN110309795A (en) | Video detecting method, device, electronic equipment and storage medium | |
CN112507163B (en) | Duration prediction model training method, recommendation method, device, equipment and medium | |
KR101541495B1 (en) | Apparatus, method and computer readable recording medium for analyzing a video using the image captured from the video | |
JP2009140042A (en) | Information processing apparatus, information processing method, and program | |
US20120042041A1 (en) | Information processing apparatus, information processing system, information processing method, and program | |
CN108197336B (en) | Video searching method and device | |
US20210006948A1 (en) | Providing a summary of media content to a communication device | |
CN109597929A (en) | Methods of exhibiting, device, terminal and the readable medium of search result | |
WO2020135189A1 (en) | Product recommendation method, product recommendation system and storage medium | |
CN111291217B (en) | Content recommendation method, device, electronic equipment and computer readable medium | |
CN113938712B (en) | Video playing method and device and electronic equipment | |
US20110276557A1 (en) | Method and apparatus for exchanging media service queries | |
CN109963174B (en) | Flow related index estimation method and device and computer readable storage medium | |
CN116049490A (en) | Material searching method and device and electronic equipment | |
CN103581744A (en) | Method for acquiring data and electronic equipment | |
CN110309361B (en) | Video scoring determination method, recommendation method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |