
CN112333481B - Video pushing method and device, server and storage medium - Google Patents

Video pushing method and device, server and storage medium

Info

Publication number
CN112333481B
Authority
CN
China
Prior art keywords
video
user
definition
sample
client
Prior art date
Legal status
Active
Application number
CN202011045153.XA
Other languages
Chinese (zh)
Other versions
CN112333481A (en)
Inventor
马茗
郭君健
于冰
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011045153.XA
Publication of CN112333481A
Application granted
Publication of CN112333481B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure relates to a video pushing method and device. The method includes: receiving a video download request sent by a client; acquiring user characteristics of a target user according to the video download request; inputting the user characteristics into a first prediction model and a second prediction model respectively to obtain a first predicted value and a second predicted value of the stay time of the target user on the client, where the first prediction model has learned the mapping relation between the user characteristics of experimental-group sample users and their stay time on the client after watching a sample video whose highest definition is a first definition, and the second prediction model has learned the mapping relation between the user characteristics of control-group sample users and their stay time on the client after watching a sample video whose highest definition is a second definition; determining, according to the first and second predicted values, the highest definition at which the client can present the video; and sending the video to the client according to the highest definition.

Description

Video pushing method and device, server and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video pushing method, an apparatus, a server, and a storage medium.
Background
In recent years, short video services based mainly on UGC (User Generated Content) have developed rapidly. The highest definition at which a short video can be played on the consumer side is the definition at which the work was produced (e.g., the same resolution, such as 1080p, and the same frame rate, such as 60 fps). In the related art, short-video definition enhancement schemes usually adopt advanced transcoding and compression algorithms to deliver to the user, at a lower bit rate, content whose subjective definition (i.e., resolution and frame rate) matches the produced work. However, with such video recommendation methods, the definition of the pushed video does not match the user's actual situation, and the pushing effect is poor.
Disclosure of Invention
The present disclosure provides a video pushing method, apparatus, server and storage medium, to at least solve the problems in the related art that the definition of the pushed video does not meet the actual situation of the user, the pushing effect is poor, and the like. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video push method, including:
receiving a video downloading request sent by a client, wherein the video downloading request is sent by the client in response to a video playing request of a target user;
acquiring the user characteristics of the target user according to the video downloading request;
inputting the user characteristics of the target user into a first prediction model to obtain a first prediction value of the stay time of the target user on the client; the first prediction model learns the mapping relation between the stay time of an experimental group sample user on the client after watching a sample video with the highest definition as the first definition and the user characteristics of the experimental group sample user;
inputting the user characteristics of the target user into a second prediction model to obtain a second prediction value of the stay time of the target user on the client; the second prediction model learns the mapping relation between the stay time of a contrast group sample user on the client after watching a sample video with the highest definition as the second definition and the user characteristics of the contrast group sample user; wherein the first definition is higher than the second definition;
and determining, according to the first predicted value and the second predicted value, the highest definition at which the client can present the video, and sending the video to the client according to the highest definition.
According to some embodiments of the present disclosure, before the obtaining the user characteristics of the target user according to the video download request, the method further includes:
acquiring the screen resolution and/or the model performance of the terminal equipment held by the target user according to the video downloading request;
and when the screen resolution is greater than or equal to a target screen resolution and/or the model performance meets a preset condition, executing the step of acquiring the user characteristics of the target user according to the video downloading request.
According to some embodiments of the disclosure, the method further comprises:
when the screen resolution is smaller than the target screen resolution and/or the model performance does not meet the preset condition, determining the highest definition corresponding to the screen resolution and/or the model performance;
and sending the video to the client according to the highest definition corresponding to the screen resolution and/or the model performance.
According to some embodiments of the present disclosure, the determining a highest definition at which the client can present the video according to the first prediction value and the second prediction value comprises:
calculating a first difference value between the first predicted value and the second predicted value, wherein the first difference value is used for representing the influence that watching the video at the first definition has on the stay time of the target user on the client;
and selecting one of the first definition and the second definition as the highest definition of the video which can be presented by the client according to the first difference value and a target threshold.
In this embodiment of the disclosure, said selecting one of the first definition and the second definition as the highest definition at which the client can present the video according to the first difference and a target threshold includes:
comparing the first difference value with the target threshold value;
when the first difference is greater than or equal to the target threshold, taking the first definition as the highest definition at which the client can present the video;
when the first difference is less than the target threshold, the second definition is taken as the highest definition at which the client can present the video.
In the embodiment of the present disclosure, the target threshold is obtained by:
acquiring user characteristics of sample users;
inputting the user characteristics of the sample user into the first prediction model and the second prediction model respectively to obtain a first prediction value and a second prediction value of the stay time of the sample user on the client;
performing difference value calculation on the first predicted value and the second predicted value of the sample user to obtain a second difference value, wherein the second difference value is used for representing the influence that watching the sample video with the highest definition being the first definition has on the stay time of the sample user on the client;
obtaining target sample users of which the second difference is larger than a threshold value from the sample users;
acquiring a service added value generated after a target sample user watches the sample video with the highest definition as the first definition;
updating the threshold value according to the service added value, and taking the updated threshold value as a new threshold value;
and returning to the step of obtaining the target sample users with the second difference value larger than the threshold value from the sample users, until the generated service added value is smaller than or equal to the target value, and taking the latest updated threshold value as the target threshold value.
According to some embodiments of the disclosure, the sending the video to the client according to the highest definition comprises:
sending the version corresponding to the highest definition of the video to the client; or,
and sending to the client at least one of the definition versions of the video whose definition is less than or equal to the highest definition.
According to some embodiments of the disclosure, the first predictive model is obtained by training:
acquiring user characteristics and sample labels of sample users of an experimental group; the sample label is used for indicating the stay time of the experimental group sample user on the client after watching the sample video with the highest definition being the first definition;
inputting the user characteristics of the experimental group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the experimental group sample users on the client after watching the sample video with the highest definition as the first definition;
calculating a loss value between the sample label of the experimental group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate the first prediction model.
According to some embodiments of the disclosure, the second predictive model is obtained by training:
acquiring user characteristics and sample labels of comparison group sample users; the sample label is used for indicating the stay time of the control group sample users on the client after watching the sample video with the highest definition being the second definition;
inputting the user characteristics of the comparison group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the comparison group sample users on the client after watching the sample video with the highest definition as the second definition;
calculating a loss value between the sample label of the comparison group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate the second prediction model.
According to some embodiments of the disclosure, the user features include:
the device information of the terminal device, the network information of the terminal device, the video definition requirement and the user portrait.
According to a second aspect of the embodiments of the present disclosure, there is provided a video push apparatus including:
the device comprises a receiving module, a processing module and a display module, wherein the receiving module is configured to receive a video downloading request sent by a client, and the video downloading request is sent by the client in response to a video playing request of a target user;
a first obtaining module configured to obtain a user characteristic of the target user according to the video downloading request;
the second obtaining module is configured to input the user characteristics of the target user into a first prediction model, and obtain a first prediction value of the stay time of the target user on the client; the first prediction model learns the mapping relation between the stay time of an experimental group sample user on the client after watching a sample video with the highest definition as the first definition and the user characteristics of the experimental group sample user;
a third obtaining module, configured to input the user characteristics of the target user into a second prediction model, and obtain a second prediction value of a stay time of the target user on the client; the second prediction model learns the mapping relation between the stay time of a contrast group sample user on the client after watching a sample video with the highest definition as the second definition and the user characteristics of the contrast group sample user; wherein the first definition is higher than the second definition;
a determination module configured to determine, according to the first prediction value and the second prediction value, the highest definition at which the client can present the video; and
a push module configured to send the video to the client according to the highest definition.
In some embodiments according to the disclosure, the apparatus further comprises:
the fourth obtaining module is configured to obtain the screen resolution and/or the model performance of the terminal device held by the target user according to the video downloading request;
the first obtaining module is further configured to execute the step of obtaining the user characteristics of the target user according to the video downloading request when the screen resolution is greater than or equal to a target screen resolution and/or the model performance meets a preset condition.
According to some embodiments of the present disclosure, the determining module is further configured to determine a highest definition corresponding to the screen resolution and/or the model performance when the screen resolution is less than the target screen resolution and/or the model performance does not satisfy the preset condition;
the push module is further configured to send the video to the client according to a highest definition corresponding to the screen resolution and/or the model performance.
According to some embodiments of the disclosure, the determining module comprises:
a calculation unit configured to calculate a first difference value between the first prediction value and the second prediction value, wherein the first difference value is used for representing the influence that watching the video at the first definition has on the stay time of the target user on the client;
a determining unit configured to select one of the first definition and the second definition as a highest definition at which the client can present the video according to the first difference and a target threshold.
According to some embodiments of the present disclosure, the determining unit is specifically configured to:
comparing the first difference value with the target threshold value;
when the first difference is greater than or equal to the target threshold, taking the first definition as the highest definition at which the client can present the video;
when the first difference is less than the target threshold, the second definition is taken as the highest definition at which the client can present the video.
In accordance with some embodiments of the disclosure, the apparatus further comprises:
a target threshold acquisition module configured to predetermine the target threshold;
wherein the target threshold acquisition module is specifically configured to:
acquiring user characteristics of sample users;
inputting the user characteristics of the sample user into the first prediction model and the second prediction model respectively to obtain a first prediction value and a second prediction value of the stay time of the sample user on the client;
performing difference value calculation on the first predicted value and the second predicted value of the sample user to obtain a second difference value, wherein the second difference value is used for representing the influence that watching the sample video with the highest definition being the first definition has on the stay time of the sample user on the client;
obtaining target sample users of which the second difference values are larger than a threshold value from the sample users;
acquiring a service added value generated after a target sample user watches the sample video with the highest definition as the first definition;
updating the threshold value according to the service added value, and taking the updated threshold value as a new threshold value;
and returning to the step of obtaining the target sample users with the second difference value larger than the threshold value from the sample users, until the generated service added value is smaller than or equal to the target value, and taking the latest updated threshold value as the target threshold value.
According to some embodiments of the present disclosure, the push module is specifically configured to:
sending the version corresponding to the highest definition of the video to the client; or,
and sending to the client at least one of the definition versions whose definition is less than or equal to the highest definition.
In some embodiments according to the disclosure, the apparatus further comprises:
a first training module configured to pre-train the first predictive model;
wherein the first training module is specifically configured to:
acquiring user characteristics and sample labels of sample users in an experimental group; the sample label is used for indicating the stay time of the experimental group sample user on the client after watching the sample video with the highest definition being the first definition;
inputting the user characteristics of the experimental group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the experimental group sample users on the client after watching the sample video with the highest definition as the first definition;
calculating a loss value between the sample label of the experimental group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset objective function, and obtaining model parameters to generate the first prediction model.
In some embodiments according to the disclosure, the apparatus further comprises:
a second training module configured to pre-train the second predictive model;
wherein the second training module is specifically configured to:
acquiring user characteristics and sample labels of comparison group sample users; the sample label is used for indicating the stay time of the control group sample users on the client after watching the sample video with the highest definition being the second definition;
inputting the user characteristics of the comparison group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the comparison group sample users on the client after watching the sample video with the highest definition as the second definition;
calculating a loss value between the sample label of the comparison group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate the second prediction model.
According to some embodiments of the disclosure, the user features include:
the device information of the terminal device, the network information of the terminal device, the video definition requirement and the user portrait.
According to a third aspect of the embodiments of the present disclosure, there is provided a server, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video push method in the embodiment of the first aspect of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, where instructions, when executed by a processor of a server, enable the server to perform the video push method described in the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, wherein when instructions of the computer program product are executed by a processor, the video push method according to the first aspect is performed.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of receiving a video downloading request sent by a client, obtaining user characteristics of a user according to the video downloading request, inputting the user characteristics into a first prediction model and a second prediction model respectively, obtaining a first prediction value and a second prediction value of the stay time of the user on the client, determining the highest definition of the video which can be presented by the client according to the first prediction value and the second prediction value, and sending the video to the client according to the highest definition. Therefore, the embodiment of the disclosure recommends videos conforming to the user characteristics for different users under the condition of comprehensively considering the user characteristics so as to realize the delivery of self-adaptive video definition versions, reasonably promotes the definition of a short video service platform, and ensures the maximization of a service added value ROI while promoting the playing experience of the users, thereby solving the problems that the pushed video definition does not conform to the actual conditions of the users, the pushing effect is poor and the like in the related technology.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is an exemplary diagram illustrating a network device in accordance with one illustrative embodiment.
Fig. 2 is a diagram illustrating an example of a process from production to distribution consumption of short videos according to an example embodiment.
Fig. 3 is a flow chart illustrating a video push method according to an example embodiment.
Fig. 4 is a flow diagram illustrating another video push method in accordance with an example embodiment.
FIG. 5 is a flow diagram illustrating the determination of a target threshold in accordance with an exemplary embodiment.
FIG. 6 is a diagram illustrating an example of corresponding code for determining a target threshold in accordance with one illustrative embodiment.
Fig. 7 is a block diagram illustrating a video push device according to an example embodiment.
Fig. 8 is a block diagram illustrating another video push apparatus according to an example embodiment.
Fig. 9 is a block diagram illustrating yet another video push apparatus according to an example embodiment.
Fig. 10 is a block diagram illustrating another video push apparatus according to an example embodiment.
Fig. 11 is a block diagram illustrating another video push apparatus according to an example embodiment.
FIG. 12 is a block diagram illustrating a server in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the description of the present disclosure, the term "terminal device" refers to a hardware device having various operating systems, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
In the description of the present disclosure, the term "client" refers to, for example, a short video client, and refers to a short video service platform based on UGC (User Generated Content), and users using the client are divided into two types, one type is a short video Content production side User and the other type is a short video Content consumption side User. The short video content producing side user can produce and release original short video content on the short video client, and the short video content consuming side user can watch and browse the short video produced by the short video content producing side user on the short video client.
In recent years, UGC-based short video services have developed rapidly. The highest definition at which a short video can be played on the consumer side is the definition at which the work was produced (e.g., the same resolution, such as 1080p, and the same frame rate, such as 60 fps). In the related art, short-video definition enhancement schemes usually adopt advanced transcoding and compression algorithms to deliver to the user, at a lower bit rate, content whose subjective definition (i.e., resolution and frame rate) matches the produced work. However, such video recommendation methods do not consider the diversity of the devices used by consumption-side users: for example, if the resolution of a user's phone screen is lower than the resolution of the played work, delivering a higher-definition work brings no perceptible benefit to that user. In addition, such methods do not consider personalized user requirements; different users have different requirements on definition. If every user is served the high-definition version whenever one exists, on the one hand this creates heavy bandwidth cost pressure on the Content Delivery Network (CDN), and on the other hand, for users who do not have a strong need for high-definition works, the relatively large high-definition files make consumption-side users more likely to experience stalled playback when watching short videos.
In order to reasonably improve the definition of a short video service platform and solve the problems in the related art that the definition of the pushed video does not match the user's actual situation and the pushing effect is poor, the present disclosure provides a video pushing method, apparatus, server and storage medium that comprehensively consider two factors: different users have different requirements on definition, and different definition gears incur different bandwidth costs. That is to say, facing the videos of different definitions produced by short-video content production-side users, the present disclosure delivers adaptive video-definition versions according to the device support, network support and definition requirements of different short-video content consumption-side users.
Referring to fig. 1, a network device according to the technical solution of the present disclosure may include: the system comprises a first terminal 11, a server 12 and a second terminal 13, wherein the first terminal 11 is a terminal device held by a short video content production side user, the second terminal 13 is a terminal device held by a short video content consumption side user, the short video content production side user produces and releases original short video content through a short video client installed on the first terminal 11, and the short video content consumption side user watches and browses short videos produced by the short video content production side user through the short video client installed on the second terminal 13. The server 12 receives the short video works produced by the short video content producing side user sent by the first terminal 11, and distributes the short video works produced by the short video content producing side user to the terminal device held by the short video content consuming side user through a certain distribution strategy. The distribution policy may include, but is not limited to, the server 12 transcoding the short video works produced by the short video content producing side users sent by the first terminal 11 into different definition versions, so as to issue the adaptive video definition versions to different short video content consuming side users.
In order to solve the problems in the related art that the definition of the pushed video does not match the user's actual situation and the pushing effect is poor, the present disclosure improves the distribution strategy on the server side, that is, it comprehensively considers the user characteristics of different users and recommends, for each user, videos that match those characteristics, so as to deliver adaptive video-definition versions. The user characteristics may include, but are not limited to, one or more of: device information of the held terminal device, network information of the held terminal device, video definition requirements, a user profile, and the like. The device information may include, but is not limited to, the device model, chip, memory, storage, price, etc. The network information may include, but is not limited to, the network type, such as Wi-Fi (Wireless Fidelity) or a mobile data network (e.g., 4G, the fourth-generation mobile communication technology, or 5G, the fifth-generation mobile communication technology), signal strength, etc. The video definition requirement may include, for example, the average aesthetic score of consumed works (i.e., the short videos browsed by the consumption-side user), the proportion of high-definition works, the proportion of producers, the proportion of top-tier producers, and the like. The user profile may include, but is not limited to, gender, age, city tier, city or country, northern or southern region, the number of video plays in the past 7 days, the number of active days, etc.
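As an illustration of how such user characteristics might be assembled on the server side, a minimal Python sketch follows. All field names and values (device_info, network_info, definition_demand, user_profile, etc.) are hypothetical and chosen only for readability; the disclosure does not prescribe any particular schema.

# Hypothetical sketch of a user-feature record assembled by the server.
# Field names and values are illustrative assumptions, not taken from the disclosure.
target_user_features = {
    "device_info": {            # device information of the held terminal
        "model": "PhoneX-12", "chip": "soc-888", "memory_gb": 8,
        "storage_gb": 256, "price_tier": "high",
    },
    "network_info": {           # network information of the held terminal
        "type": "wifi",         # or "4g", "5g"
        "signal_strength_dbm": -55,
    },
    "definition_demand": {      # video definition requirement
        "avg_aesthetic_score": 3.7,
        "hd_work_ratio": 0.62,
    },
    "user_profile": {           # user profile / portrait
        "gender": "f", "age": 28, "city_tier": 1,
        "plays_last_7_days": 143, "active_days_last_7_days": 6,
    },
}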
Referring to fig. 2, the process from production to distribution and consumption of a short video according to an embodiment of the present disclosure may be as shown in fig. 2. A short-video content production-side user produces a short video and uploads it to the server. After receiving the short-video work uploaded by the production-side user, the server can transcode it into definition versions at or below its original resolution: for example, a 1080p work can be transcoded into three definition gears, 1080p, 720p and 540p, while a 720p work can be transcoded into two definition gears, 720p and 540p. After transcoding the uploaded work, the server can deliver videos at different definition gears to different consumption-side users according to the content-distribution gear decision, and the consumption-side users watch the definition gear delivered by the server. It should be noted that the present disclosure specifically improves the content-distribution gear decision, that is, it comprehensively considers user characteristics to decide at what definition gear each video is delivered to each user. For specific implementations, refer to the description of the embodiments shown in fig. 3 to 10 below. A small sketch of the transcoding-gear rule just described is given after this paragraph.
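The following Python sketch illustrates the transcoding-gear rule described above: a work is transcoded only into gears at or below its original resolution. The gear list and function name are assumptions for illustration; the disclosure names 1080p/720p/540p only as examples.

# Minimal sketch of the transcoding-gear rule; constants are illustrative assumptions.
GEARS = [1080, 720, 540]  # definition gears, highest first

def transcode_gears(source_resolution_p: int) -> list[int]:
    """Return the definition versions to produce for an uploaded work."""
    return [g for g in GEARS if g <= source_resolution_p]

assert transcode_gears(1080) == [1080, 720, 540]
assert transcode_gears(720) == [720, 540]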
Fig. 3 is a flowchart illustrating a video push method according to an exemplary embodiment, where, as shown in fig. 3, the video push method is used in a server according to an embodiment of the present disclosure, and includes the following steps:
in step S301, a video download request sent by a client is received, where the video download request is sent by the client in response to a video play request of a target user.
For example, when a target user requests video playing on a client, the client may send a video download request to the server in response to the video playing request, thereby enabling the server to receive the video download request sent by the client.
In step S302, the user characteristics of the target user are acquired according to the video download request.
The obtaining mode of the user characteristics of the target user may include multiple types:
as an example of one possible implementation manner, the video download request may carry a user characteristic, that is, when the client monitors that the target user performs a video playing operation and then sends the video download request to the server, the video download request may carry the user characteristic of the target user. When the server receives the video downloading request, the user characteristics of the target user can be directly obtained from the video downloading request, that is, when the target user requests video playing on the client, the client can actively send the user characteristics of the target user to the server.
As another example of possible implementation, when receiving a video downloading request sent by the client, the server may send an obtaining request of user characteristics to the client; the client acquires the user characteristics of the target user according to the acquisition request and returns the user characteristics of the target user to the server, so that the server acquires the user characteristics of the target user.
As another example of possible implementation manners, the user characteristics of the target user may be stored in the server, and when receiving a video downloading request sent by the client, the server may obtain the user characteristics of the target user corresponding to the client from the storage module. It is to be understood that the user characteristics of the target user can also be obtained by other obtaining manners, and the disclosure is not limited in detail herein.
In the embodiment of the present disclosure, the user characteristics of the target user may include, but are not limited to, one or more of device information of a terminal device held by the target user, network information of the held terminal device, video definition requirements of the target user, a user representation of the target user, and the like. It is to be understood that the target user can be understood as a short video content consuming side user who wants to view a short video using a client.
In step S303, the user characteristics of the target user are input into the first prediction model, and a first predicted value of the stay time of the target user on the client is obtained.
In this step, the server may input the obtained user characteristics of the target user into a pre-trained first prediction model to predict the stay time, thereby obtaining a first predicted value of the target user's stay time on the client. The stay time may be understood as the length of time the user stays on and continues to use the client on the following day after watching, through the client, a sample video whose highest definition is the first definition.
In the embodiment of the present disclosure, the first prediction model learns the mapping relationship between the staying time of the experimental group sample users on the client after watching the sample video with the highest definition as the first definition and the user characteristics of the experimental group sample users. In some embodiments of the present disclosure, the first predictive model is obtained by training: acquiring user characteristics and sample labels of sample users in an experimental group; the sample label is used for indicating the stay time of the sample users in the experimental group on the client after watching the sample video with the highest definition as the first definition; inputting the user characteristics of the experimental group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the experimental group sample users on a client after watching a sample video with the highest definition as a first definition; calculating a loss value between a sample label and a predicted value of the sample user of the experimental group according to a preset algorithm; and training the neural network model according to the loss value and a preset objective function, and obtaining model parameters to generate a first prediction model.
For example, when an experimental group sample user uses a client, a server may obtain user characteristics of the experimental group sample user in real time, and tag the user characteristics of the experimental group sample user to obtain a sample tag of the experimental group sample user, where the sample tag indicates a time length of stay on the client after the experimental group sample user watches a sample video with a highest definition being a first definition, and the time length of stay refers to a time length of stay when the experimental group sample user continues to use the client again the next day after watching the sample video with the highest definition being the first definition. Inputting the user characteristics of the experimental group sample users into the neural network model for prediction, obtaining a predicted value of the stay time of the experimental group sample users on a client after watching a sample video with the highest definition as the first definition, calculating a loss value between a sample label and the predicted value according to a preset loss function, training the neural network model according to the loss value and a preset target function so as to adjust and optimize the model parameters of the neural network model, and generating the first prediction model according to the adjusted and optimized model parameters. For example, machine learning modeling is performed by a neural network model using user characteristics of sample users in an experimental group and sample labels, and the model obtained by learning is used as the first prediction model.
In the embodiment of the present disclosure, the neural network model may be an XGBoost algorithm, or the neural network model may also be another neural network model, such as a linear regression model, a GBDT (gradient boosting decision tree), and the like, which is not specifically limited by the present disclosure.
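Since the disclosure names XGBoost as one possible choice, the following is a minimal sketch of how such a dwell-time regressor could be trained, assuming the experimental-group user characteristics have already been flattened into a numeric matrix X and the observed next-day stay times (the sample labels) into a vector y. The helper name, data split and hyperparameters are illustrative assumptions, not values from the disclosure; the second prediction model would be trained in the same way on the control-group samples served the second definition.

# Hypothetical sketch: training the first prediction model on experimental-group samples.
# X: (n_samples, n_features) numeric user characteristics; y: observed stay times.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

def train_dwell_time_model(X: np.ndarray, y: np.ndarray) -> xgb.XGBRegressor:
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                      random_state=0)
    model = xgb.XGBRegressor(
        objective="reg:squarederror",   # squared-error loss between label and prediction
        n_estimators=300,
        max_depth=6,
        learning_rate=0.05,
    )
    # The validation set monitors the loss value that drives training.
    model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)
    return model

# first_model = train_dwell_time_model(X_exp, y_exp)     # experimental group, first definition
# second_model = train_dwell_time_model(X_ctrl, y_ctrl)  # control group, second definition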
In step S304, the user characteristics of the target user are input into the second prediction model, and a second predicted value of the stay time of the target user on the client is obtained.
In this step, the server may input the obtained user characteristics of the target user into a pre-trained second prediction model to predict the stay time, thereby obtaining a second predicted value of the target user's stay time on the client. Here the stay time may be understood as the length of time the user stays on and continues to use the client on the following day after watching, through the client, a sample video whose highest definition is the second definition.
In the embodiment of the present disclosure, the second prediction model has learned a mapping relationship between a staying time length on the client after the control group sample user watches the sample video with the highest definition as the second definition and a user characteristic of the control group sample user. In some embodiments of the present disclosure, the second predictive model is obtained by training: acquiring user characteristics and sample labels of comparison group sample users; the sample label is used for indicating the stay time of a control group sample user on a client after watching a sample video with the highest definition as the second definition; inputting the user characteristics of the comparison group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the comparison group sample users on a client after watching the sample video with the highest definition as the second definition; calculating a loss value between the sample label of the comparison group sample user and the predicted value according to a preset algorithm; and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate a second prediction model.
It should be noted that the control group sample user used for training the second prediction model may be the same as the experimental group sample user used for training the first prediction model, and the differences are as follows: when a first prediction model is trained, a sample video with the highest definition as first definition is sent to an experimental group sample user, and a sample label of the experimental group sample user is used for indicating the stay time of the experimental group sample user on a client after watching the sample video with the highest definition as the first definition; and when the second prediction model is trained, the sample video with the highest definition as the second definition is sent to the comparison group sample user, and the sample label of the comparison group sample user is used for indicating the stay time of the comparison group sample user on the client after watching the sample video with the highest definition as the second definition. Wherein the first resolution is higher than the second resolution. It is to be understood that the content of the sample video with the highest definition being the second definition may be the same as the content of the sample video with the highest definition being the first definition.
For example, when the comparison group sample user uses the client, the server may obtain the user characteristics of the comparison group sample user in real time, and tag the user characteristics of the comparison group sample user to obtain a sample tag of the comparison group sample user, where the sample tag indicates a time length of the comparison group sample user staying on the client after watching a sample video with the highest definition being the second definition, and the time length of the comparison group sample user staying the next day after watching the sample video with the highest definition being the second definition and allowing the comparison group sample user to continue to use the client again. Inputting the user characteristics of the comparison group sample users into the neural network model for prediction, obtaining a predicted value of the stay time of the comparison group sample users on a client after watching a sample video with the highest definition as the second definition, calculating a loss value between a sample label and the predicted value according to a preset loss function, training the neural network model according to the loss value and a preset target function so as to adjust and optimize the model parameters of the neural network model, and generating the second prediction model according to the adjusted and optimized model parameters. For example, machine learning modeling is performed by a neural network model using the user characteristics of the control group sample users and the sample labels, and the model obtained by learning is used as the second prediction model.
In the embodiment of the present disclosure, the neural network model may be an XGBoost algorithm, or the neural network model may also be another neural network model, such as GBDT, which is not specifically limited in the present disclosure.
It should be noted that the execution sequence of step S303 and step S304 may be that step S303 is executed first and then step S304 is executed, or step S304 is executed first and then step S303 is executed, or step S303 and step S304 are executed simultaneously.
In step S305, the highest definition at which the client can present the video is determined according to the first prediction value and the second prediction value, and the video is sent to the client according to the highest definition.
Optionally, the first predicted value, the second predicted value and a target threshold may be used to determine the highest definition at which the client can present the video. In the embodiment of the disclosure, a first difference between the first predicted value and the second predicted value may be calculated, and one of the first definition and the second definition is selected as the highest definition at which the client can present the video according to the first difference and the target threshold. The first difference is used for representing the influence that watching the video at the first definition has on the target user's stay time on the client.
That is, after obtaining the first predicted value and the second predicted value of the target user, the second predicted value may be subtracted from the first predicted value to obtain a first difference value between the first predicted value and the second predicted value. The first difference and the target threshold may then be used to select one of the first and second resolutions as the highest resolution at which the client may present the video. For example, taking the first definition as 1080p and the second definition as 720p as an example, after obtaining the first predicted value and the second predicted value of the target user, the second predicted value can be subtracted from the first predicted value to obtain a first difference value between the first predicted value and the second predicted value. The first difference and the target threshold may then be used to select a definition from 1080p and 720p as the highest definition at which the client can present the video, for example, 720p.
It should be noted that there are many implementations of selecting one of the first definition and the second definition as the highest definition at which the client can present the video according to the first difference and the target threshold. As an example of one possible implementation, the first difference is compared with the target threshold; when the first difference is greater than or equal to the target threshold, the first definition is taken as the highest definition at which the client can present the video; when the first difference is less than the target threshold, the second definition is taken as the highest definition at which the client can present the video. The target threshold may be a threshold determined under the constraint that the definition of the delivered video is improved while the service added value is kept maximized. The target threshold may be preset directly based on a large number of experiments, or it may be a reasonable, optimal threshold explored from high to low on large-scale online data; the determination of the target threshold may be as described in the following embodiments.
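A minimal Python sketch of this gear decision is shown below: it computes the first difference between the two predicted stay times and compares it with the target threshold. The function and variable names, and the 1080p/720p gears, are illustrative assumptions.

# Hypothetical sketch of the content-distribution gear decision for one request.
def decide_highest_definition(first_pred: float, second_pred: float,
                              target_threshold: float,
                              first_definition: str = "1080p",
                              second_definition: str = "720p") -> str:
    """Select the highest definition the client will be served.

    first_pred / second_pred are the predicted stay times from the first
    (higher-definition) and second (lower-definition) models; their difference
    estimates the uplift in stay time from serving the higher definition.
    """
    first_difference = first_pred - second_pred
    if first_difference >= target_threshold:
        return first_definition   # uplift large enough: serve the higher gear
    return second_definition      # otherwise fall back to the lower gear

# Example: a predicted uplift of 4.2 minutes against a threshold of 6.0
# keeps this user on the 720p gear.
print(decide_highest_definition(35.8, 31.6, 6.0))  # -> "720p"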
In the embodiment of the present disclosure, the service added value may be understood as the difference between revenue and cost. For users who are sensitive to definition, improving definition noticeably improves their viewing experience and increases next-day retention and app usage time on the client; increased retention and usage time mean that the app's advertisement revenue increases proportionally, and this revenue can be expressed as revenue_duration. On the other hand, higher definition means higher content-distribution bandwidth consumption and thus a higher cost paid to CDN vendors, which can be expressed as cost_bandwidth. Accordingly, the service added value (Return On Investment, ROI) = revenue_duration - cost_bandwidth.
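Since fig. 5 and fig. 6 illustrate exploring the target threshold from high to low until the generated service added value no longer exceeds a target value, one possible reading of that loop is sketched below. The revenue and cost models (revenue_per_minute, extra_bandwidth_cost_per_user), the step size, and the stopping convention are assumptions for illustration only, not details taken from the disclosure.

# Hypothetical sketch of the threshold search: lower the threshold step by step;
# at each step, users whose predicted stay-time uplift exceeds the threshold are
# served the higher definition, and the resulting service added value
# (ROI = revenue_duration - cost_bandwidth) is checked against a target value.
from typing import Sequence

def search_target_threshold(uplifts: Sequence[float],
                            revenue_per_minute: float,
                            extra_bandwidth_cost_per_user: float,
                            start: float, step: float,
                            target_value: float) -> float:
    threshold = start
    while threshold > 0:
        upgraded = [u for u in uplifts if u > threshold]
        revenue_duration = sum(upgraded) * revenue_per_minute
        cost_bandwidth = len(upgraded) * extra_bandwidth_cost_per_user
        roi = revenue_duration - cost_bandwidth     # service added value
        if roi <= target_value:
            break                                   # added value no longer above target: stop
        threshold -= step                           # otherwise keep lowering the threshold
    return threshold                                # latest updated threshold

# threshold = search_target_threshold(uplifts=per_user_uplifts,
#                                     revenue_per_minute=0.002,
#                                     extra_bandwidth_cost_per_user=0.01,
#                                     start=10.0, step=0.5, target_value=0.0)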
In this step, after determining that the client can present the highest definition of the video, the video may be sent to the client according to the highest definition.
As an example, a version corresponding to the highest definition of the video may be sent to a client; or sending at least one definition version in the versions with the definition of the video being less than or equal to the definition corresponding to the highest definition to the client. For example, assuming that it is determined that the highest definition of the video that the client can present is 720p, a version corresponding to the highest definition of the video 720p may be pushed to the target user; or pushing the version corresponding to the 720p version and/or the version corresponding to the 540p version of the video to the target user.
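A small sketch of these two delivery options follows, assuming the transcoded versions of the work are held in a list ordered from highest to lowest definition; the names and the version list are illustrative assumptions.

# Hypothetical sketch of assembling what is sent to the client once the
# highest presentable definition has been decided.
AVAILABLE_VERSIONS = ["1080p", "720p", "540p"]   # highest to lowest

def versions_to_send(highest_definition: str, single_version: bool) -> list[str]:
    idx = AVAILABLE_VERSIONS.index(highest_definition)
    if single_version:
        return [AVAILABLE_VERSIONS[idx]]          # only the decided gear
    return AVAILABLE_VERSIONS[idx:]               # that gear and every lower one

print(versions_to_send("720p", single_version=True))   # -> ['720p']
print(versions_to_send("720p", single_version=False))  # -> ['720p', '540p']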
For example, suppose a short video content production side user produces a 1080p resolution short video work and uploads it to the server. The server can transcode the short video works into definition versions with three different gears, such as 1080p definition version, 720p definition version and 540p definition version, and the short video service platform more hopes to provide videos with the highest definition for users with enough model performance so as to improve the playing experience of the users and ensure the maximization of a service added value ROI. The server can pre-establish a first prediction model and a second prediction model, wherein the first prediction model learns the mapping relation between the stay time of an experimental group sample user on the client after watching a sample video with the definition of 1080p and the user characteristics of the experimental group sample user; and the second prediction model learns the mapping relation between the stay time of the contrast group sample user on the client after watching the sample video with the definition of 720p and the user characteristics of the contrast group sample user. When a short video content consuming side user wants to view and browse a short video on a client, the server can receive a video playing request of the target user and acquire the user characteristics of the target user. And then, the server inputs the user characteristics into the first prediction model and the second prediction model respectively to obtain a first prediction value and a second prediction value of the stay time of the target user on the client. Then, one definition is selected from 1080p and 720p according to the first predicted value and the second predicted value to serve as the highest definition of the video that can be presented by the client, and if the definition 1080p is the highest definition of the video that can be presented by the client, a 1080p definition version of the video can be pushed to the target user, or at least one definition version of versions of the video with the definition less than or equal to 1080p is pushed to the target user, for example, the 1080p version, the 720p version and the 540p version are all pushed to the target user.
According to the video pushing method of the embodiments of the disclosure, a video download request sent by a client is received, the user characteristics of the user are acquired according to the video download request, the user characteristics are input into a first prediction model and a second prediction model respectively to obtain a first predicted value and a second predicted value of the user's stay time on the client, the highest definition at which the client can present the video is determined according to the first predicted value and the second predicted value, and the video is sent to the client according to the highest definition. In this way, the embodiments of the disclosure comprehensively consider user characteristics and recommend, for different users, videos that match those characteristics, so as to deliver adaptive video-definition versions. This reasonably improves the definition offered by a short-video service platform and, while improving the users' playback experience, maximizes the service added value (ROI), thereby solving the problems in the related art that the definition of the pushed video does not match the user's actual situation and the pushing effect is poor.
It should also be considered that, for a user whose mobile phone screen resolution is low, even if a work with a resolution higher than that of the phone is delivered, the user on the short video content consumption side cannot perceive any improvement in definition, while more bandwidth is consumed to download the high-definition work. For some low-end models, the decoding and rendering capability of the phone is not enough to play high-resolution, high-frame-rate video; when the decoding and rendering speed falls below the playing speed, frames are dropped during playback, which the user experiences as stuttering. Therefore, for a user whose phone has a low screen resolution and/or whose model performance is poor, the corresponding highest-definition version can be pushed according to the screen resolution and/or the model performance. For a user whose phone screen resolution and/or model performance meet the requirements, the highest definition of the video that the client can present is determined from the user characteristics, and the video is then sent to the client according to that highest definition. Specifically, in some embodiments of the present disclosure, as shown in fig. 4, the video pushing method may include:
in step S401, a video download request sent by a client is received, where the video download request is sent by the client in response to a video play request of a target user.
In step S402, according to the video download request, the screen resolution and/or model performance of the terminal device held by the target user are obtained.
As an example, the video download request may carry the screen resolution and/or model performance of the terminal device held by the user. In this step, the screen resolution and/or model performance of the terminal device held by the target user may be obtained from the video download request.
In step S403, when the screen resolution is greater than or equal to the target screen resolution and/or the model performance meets the preset condition, the user characteristics of the target user are obtained according to the video download request.
In step S404, the user characteristics of the target user are input into the first prediction model, and a first predicted value of the stay time of the target user on the client is obtained.
In step S405, the user characteristics of the target user are input into the second prediction model, and a second predicted value of the stay time of the target user on the client is obtained.
In step S406, the highest definition of the video that can be presented by the client is determined according to the first prediction value and the second prediction value, and the video is sent to the client according to the highest definition.
In step S407, when the screen resolution is less than the target screen resolution and/or the model performance does not satisfy the preset condition, the highest definition corresponding to the screen resolution and/or the model performance is determined.
That is to say, when the terminal device held by the user cannot present a higher-definition video, the highest definition it can present may be determined according to the screen resolution and/or the model performance of the terminal device.
In step S408, the video is transmitted to the client according to the highest definition corresponding to the screen resolution and/or the model performance.
That is, the video is pushed to the user at the highest definition that the user's terminal device can actually present. A sketch of this decision flow is given below.
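For illustration only, the decision flow of steps S401 to S408 can be sketched as follows. This is a minimal sketch, assuming Python, a target screen resolution of 1280x720, and that model_exp and model_base expose a predict method returning the predicted stay time; the combination of the screen and performance conditions, and all names shown, are assumptions rather than the literal implementation of the embodiment.

```python
TARGET_SCREEN = (1280, 720)   # assumed target screen resolution (width, height)

def decide_highest_definition(screen, device_ok, features,
                              model_exp, model_base, target_threshold,
                              device_max_definition):
    """Sketch of steps S401-S408: choose the highest definition gear to push."""
    screen_ok = screen[0] >= TARGET_SCREEN[0] and screen[1] >= TARGET_SCREEN[1]

    # Steps S407-S408: the device cannot present a higher-definition version,
    # so fall back to the highest definition its screen/model itself supports.
    if not (screen_ok and device_ok):          # the text says "and/or"; "and" is assumed here
        return device_max_definition

    # Steps S403-S405: predict the stay time under the two definition gears.
    score_exp = model_exp.predict(features)    # first predicted value (1080p experimental group model)
    score_base = model_base.predict(features)  # second predicted value (720p comparison group model)

    # Step S406: pick the highest definition the client may present.
    return "1080p" if score_exp - score_base >= target_threshold else "720p"
```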
According to the video pushing method of the embodiment of the disclosure, when a video download request sent by a client is received, the screen resolution and/or the model performance of the terminal device held by the target user can be obtained from the request, so as to judge whether the terminal device can present a higher-definition version of the video. When the screen resolution is greater than or equal to the target screen resolution and/or the model performance meets the preset condition, the terminal device can present a higher-definition version; the highest definition the client can present is then determined from the user characteristics, and the video is sent to the client according to that highest definition. When the screen resolution is smaller than the target screen resolution and/or the model performance does not meet the preset condition, the terminal device cannot present a higher-definition version, and the corresponding highest-definition version is pushed to the user according to the screen resolution and/or the model performance. In this way, with the user characteristics, the screen resolution and the model performance of the terminal device all taken into account, each user is recommended a video that matches the user's terminal, adaptive video definition versions are delivered, the definition offered by the short video service platform is raised reasonably, and the service added value is kept as large as possible while the playing experience is improved.
It should be noted that the above target threshold can be a reasonable, near-optimal threshold explored from high to low using large-scale online data. Optionally, in some embodiments of the present disclosure, as shown in fig. 5, the target threshold may be obtained by:
in step S501, user characteristics of a sample user are acquired.
In step S502, user characteristics of the sample user are input to the first prediction model and the second prediction model, respectively, and a first prediction value and a second prediction value of a stay time of the sample user on the client are obtained.
In step S503, the first prediction value and the second prediction value of the sample user are subjected to difference calculation to obtain a second difference value.
In the embodiment of the present disclosure, the second difference represents the influence that watching a sample video whose highest definition is the first definition has on the time the sample user stays on the client.
In step S504, a target sample user whose second difference is greater than the threshold is obtained from the sample users.
In an embodiment of the present disclosure, the threshold may be preset, for example, an initial value of the preset threshold may be 1.
In step S505, the service added value generated after the target sample user watches the sample video with the highest definition being the first definition is obtained.
Optionally, after the target sample users whose second difference is greater than the threshold are obtained from the sample users, the first sample video may be sent to the terminal devices held by those target sample users, and each target sample user watches the first sample video through the client on the terminal device. The server can then obtain the time each target sample user stays on the client after watching the sample video whose highest definition is the first definition, convert that stay time into a corresponding revenue, obtain the bandwidth consumption cost of distributing this sample video to the target sample users, and calculate, from the revenue and the bandwidth cost, the service added value generated after the target sample users watch the sample video, so that the threshold can subsequently be updated according to the generated service added value. A rough sketch of this calculation is given below.
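The following minimal sketch illustrates the calculation just described; the conversion factor from stay time to revenue and the bandwidth pricing are placeholder assumptions, not values from the embodiment.

```python
def service_added_value(total_stay_seconds, delivered_bytes,
                        revenue_per_second=0.0001, cost_per_gb=0.05):
    """Service added value: stay-time revenue minus bandwidth cost (placeholder rates)."""
    revenue = total_stay_seconds * revenue_per_second          # revenue converted from stay time
    bandwidth_cost = (delivered_bytes / 1e9) * cost_per_gb     # cost of distributing the video
    return revenue - bandwidth_cost
```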
In step S506, it is determined whether the generated service added value is greater than the target value; if it is greater than the target value, step S507 is performed, and if it is less than or equal to the target value, step S508 is performed.
In step S507, the threshold is updated, the updated threshold is taken as the new threshold, and the flow returns to step S504, that is, to the step of obtaining, from the sample users, the target sample users whose second difference is greater than the threshold.
In the embodiment of the present disclosure, when the generated service added value is determined to be greater than the target value, the threshold may be updated, for example adjusted downward. The step of each downward adjustment may be a fixed value: the fixed value is subtracted from the current threshold and the resulting difference is taken as the new threshold. As an example, the fixed value may be 0.01.
It should be noted that, in the embodiment of the present disclosure, the target value may be understood as the service added value obtained for the target sample users selected in the previous round, that is, the service added value generated after the previously selected target sample users watched the sample video whose highest definition is the first definition.
In step S508, the most recently updated threshold is set as the target threshold.
For example, the user characteristics of the sample users are obtained and input into the first prediction model and the second prediction model respectively, giving a first predicted value and a second predicted value of the time each sample user stays on the client. The difference between the two predicted values is then calculated as the second difference. A reasonable, near-optimal threshold_score can then be explored from high to low using these second differences and taken as the target threshold. As an example, the exploration process can be expressed as the code shown in fig. 6, where threshold_score denotes the threshold with an initial value of 1, roi_max denotes the maximum service added value seen so far, roi_tmp denotes the service added value of the current round, the initial values of roi_max and roi_tmp are both 0, and an if statement decides the target threshold. That is, after the second differences of the sample users are obtained, the sample video whose highest definition is the first definition is issued to the group of target sample users whose second difference is greater than the current threshold, and the service added value roi_tmp generated after this group watches the video is calculated. If roi_tmp is greater than roi_max, roi_tmp is assigned to roi_max and the threshold is lowered, for example by subtracting a fixed value (e.g. 0.01) from the current threshold and taking the difference as the new threshold; if the new threshold is still greater than 0, the flow returns to the step of issuing the first-definition sample video to the target sample users whose second difference is greater than the current threshold. The loop continues until the service added value of the current round is less than or equal to the target value, and the most recently updated threshold is taken as the target threshold. A sketch of this loop follows.
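A minimal sketch of this exploration loop (the logic attributed to fig. 6), assuming Python; deliver_and_measure_roi is an assumed helper that issues the first-definition sample video to the selected user group and returns the resulting service added value, and the stopping rule shown is one reading of the description above.

```python
def explore_target_threshold(sample_users, score_diffs, deliver_and_measure_roi, step=0.01):
    """Search threshold_score from high to low until the service added value stops growing."""
    threshold_score = 1.0   # initial value of the threshold
    roi_max = 0.0           # best service added value seen so far (roi_tmp is the current round)

    while threshold_score > 0:
        # Target sample users: those whose second difference exceeds the current threshold.
        target_group = [user for user, diff in zip(sample_users, score_diffs)
                        if diff > threshold_score]

        # Issue the sample video whose highest definition is the first definition
        # to this group and measure the service added value it generates.
        roi_tmp = deliver_and_measure_roi(target_group)

        if roi_tmp > roi_max:
            roi_max = roi_tmp
            threshold_score -= step   # lower the threshold and try again
        else:
            break                     # added value no longer grows; stop exploring

    return threshold_score            # the most recently updated threshold is the target threshold
```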
Therefore, according to the embodiment of the disclosure, a reasonable, near-optimal threshold can be explored from high to low using large-scale online data and taken as the target threshold, which then serves as the criterion for judging whether the highest-definition version of the video should be issued to the target user. In this way, the definition of the video issued by the short video service platform is raised reasonably, the service added value is kept as large as possible, and the playing experience of the user is improved.
It is worth noting that, faced with videos of different definitions produced by users on the short video content production side, the method issues adaptive video definition versions according to the model support, network support and definition requirements of different users on the short video content consumption side. For example, assume the server transcodes a work into three resolution gears: 1080p, 720p and 540p.
First, the dimensions of the user's model, definition requirements and cost consumption need to be analyzed, where:
1. The model may include the screen size and the phone's capability. 1) Screen size: for a user whose mobile phone screen resolution is relatively low, even if a work with a resolution higher than the phone's is delivered, the user on the short video content consumption side cannot perceive any improvement in definition while consuming more bandwidth to download the high-definition work, so each device is only delivered gears whose resolution is less than or equal to the phone screen resolution (for example, the 1080p and 720p gears are not delivered to such a device); 2) Phone performance: for some low-end models, the decoding and rendering capability of the phone is not enough to play high-resolution, high-frame-rate video, and when the decoding and rendering speed falls below the playing speed, frames are dropped during playback, which the user experiences as stuttering. The delivery gear decision module can therefore collect, online and in real time, the mapping "model, playing definition gear" → "frame loss rate", and for a model whose frame loss rate exceeds a threshold Threshold_dropframerate (e.g. 6%), the corresponding resolution gear is not issued (e.g. the 1080p gear is withheld). A sketch of this gear-filtering logic appears after this list.
2. Issuing personalized definition gears: for most users, current mobile phones can smoothly play 1080p works, but the service added value (ROI) of raising the definition has two sides:
1) Revenue: for users sensitive to definition, raising the definition noticeably improves the playing experience, which in turn improves the client app's next-day retention and usage duration; higher next-day retention and longer usage duration mean that the app's advertising revenue and similar income grow in proportion;
2) Cost: raising the definition means more content delivery bandwidth is consumed and, with it, a higher cost paid to the CDN vendor.
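The gear-filtering rules in item 1 above can be sketched as follows; the gear table, the 6% threshold and the frame_drop_rate_of lookup (standing in for the online mapping "model, playing definition gear" → "frame loss rate") are illustrative assumptions.

```python
THRESHOLD_DROPFRAME_RATE = 0.06   # example threshold from the text (6%)

GEAR_RESOLUTIONS = {"1080p": (1920, 1080), "720p": (1280, 720), "540p": (960, 540)}

def allowed_gears(screen_width, screen_height, device_model, frame_drop_rate_of):
    """Return the definition gears that may be delivered to this device."""
    gears = []
    for gear, (width, height) in GEAR_RESOLUTIONS.items():
        # 1) Screen size: never deliver a gear above the phone's screen resolution.
        if width > screen_width or height > screen_height:
            continue
        # 2) Phone performance: skip gears whose measured frame loss rate is too high.
        if frame_drop_rate_of(device_model, gear) > THRESHOLD_DROPFRAME_RATE:
            continue
        gears.append(gear)
    return gears
```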
Therefore, in order to keep the ROI at least balanced, the present disclosure designs an ab experiment and uses time-aligned parallel dual modeling to identify the user group sensitive to definition, and issues the high definition gear to the users with the strongest definition requirements, so as to maximize the ROI (stay-duration revenue minus bandwidth cost). The specific process may be as follows:
1. Perform an ab experiment: the comparison group (i.e. the base group, which may be understood as the comparison group sample users described in the embodiments of the present application) is issued 720p as its highest definition gear, and the experimental group (i.e. the exp group, which may be understood as the experimental group sample users described in the embodiments of the present application) is issued 1080p as its highest definition gear;
2. Model the next-day retention of the base group users (that is, the time a user stays on the client after watching the 720p definition version) with the XGBoost algorithm, generating Model_Base, namely the second prediction model;
3. Use Model_Base to obtain a retention prediction score_base for every user in both the exp group and the base group;
4. Model the next-day retention of the exp group users with the XGBoost algorithm, generating Model_Exp, namely the first prediction model, and use Model_Exp to calculate a retention prediction score_exp for every user in both the exp group and the base group;
5. For each user, compute score_diff = score_exp − score_base, where score_diff may be understood as the effect that watching high definition video has on this user's retention: score_diff > 0 means a positive effect, and score_diff < 0 means a negative effect;
6. Using the above steps S501–S508, a target threshold threshold_score is explored from high to low using large-scale online data. A sketch of the dual-model training in steps 2–4 follows this list.
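A minimal sketch of the dual-model training in steps 2–4, using the XGBoost library on synthetic stand-in data; the feature dimensionality, hyperparameters and the use of a squared-error regression objective are assumptions, since the embodiment only specifies that the XGBoost algorithm models next-day retention.

```python
import numpy as np
import xgboost as xgb

# Synthetic stand-ins for the ab-experiment data: user-feature matrices and
# next-day retention labels for the base (720p) group and the exp (1080p) group.
rng = np.random.default_rng(0)
X_base, y_base = rng.random((1000, 20)), rng.random(1000)
X_exp, y_exp = rng.random((1000, 20)), rng.random(1000)

params = {"objective": "reg:squarederror", "max_depth": 6, "eta": 0.1}   # placeholder hyperparameters

# Model_Base learns retention under the 720p gear; Model_Exp learns it under the 1080p gear.
model_base = xgb.train(params, xgb.DMatrix(X_base, label=y_base), num_boost_round=100)
model_exp = xgb.train(params, xgb.DMatrix(X_exp, label=y_exp), num_boost_round=100)

# Retention predictions for every user in both groups, then the per-user difference.
X_all = xgb.DMatrix(np.vstack([X_base, X_exp]))
score_base = model_base.predict(X_all)   # predicted retention under 720p
score_exp = model_exp.predict(X_all)     # predicted retention under 1080p
score_diff = score_exp - score_base      # > 0 means high definition helps this user's retention
```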
Finally, for users whose model performance is sufficient, when a video playing request arrives, the server acquires the user characteristics in real time, predicts score_diff for the user based on the existing Model_Base and Model_Exp models, issues the 1080p definition video to users with score_diff >= threshold_score, and issues the 720p definition video to users with score_diff < threshold_score.
In this way, user characteristics are taken into account and the definition is raised in a personalized way; cost is also taken into account, so the definition is raised where the service added value is best. In other words, around the user characteristics, the scheme balances two factors, namely that different users have different definition requirements and that different definition gears generate different bandwidth costs, to design a personalized consumption-side definition promotion scheme. This realizes adaptive delivery of video definition versions, raises the definition offered by the short video service platform reasonably, and keeps the service added value (ROI) as large as possible while improving the users' playing experience.
Fig. 7 is a block diagram illustrating a video push device according to an example embodiment. As shown in fig. 7, the apparatus includes a receiving module 790, a first obtaining module 710, a second obtaining module 720, a third obtaining module 730, a determining module 740, and a pushing module 750.
Wherein, the receiving module 790 is configured to receive a video download request sent by a client, wherein the video download request is sent by the client in response to a video playing request of a target user.
The first obtaining module 710 is configured to obtain the user characteristics of the target user according to the video downloading request. In the embodiment of the present disclosure, the user characteristics of the target user may include, but are not limited to, one or more of device information of a terminal device held by the target user, network information of the held terminal device, video definition requirements of the target user, a user representation of the target user, and the like.
The second obtaining module 720 is configured to input the user characteristics of the target user into the first prediction model, and obtain a first predicted value of the stay time of the target user on the client; and the first prediction model learns the mapping relation between the stay time of the experimental group sample user on the client after watching the sample video with the highest definition as the first definition and the user characteristics of the experimental group sample user.
The third obtaining module 730 is configured to input the user characteristics of the target user into the second prediction model, and obtain a second predicted value of the stay time of the target user on the client; the second prediction model learns the mapping relation between the stay time of a contrast group sample user on the client after watching a sample video with the highest definition as the second definition and the user characteristics of the contrast group sample user; wherein the first definition is higher than the second definition.
The determining module 740 is configured to determine the highest definition at which the client can present the video based on the first prediction value and the second prediction value.
The pushing module 750 is configured to send the video to the client according to the highest definition. As an example, the pushing module 750 sends the version of the video corresponding to the highest definition to the client, or sends to the client at least one of the versions whose definition is less than or equal to the highest definition. A brief sketch of this selection follows.
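A minimal sketch of this selection, assuming the three example gears used elsewhere in this disclosure; the gear list and the push_all_lower switch are illustrative assumptions.

```python
DEFINITION_ORDER = ["540p", "720p", "1080p"]   # example gears, ordered from low to high

def versions_to_push(highest_definition, push_all_lower=False):
    """Either the single version at the highest definition, or every version at or below it."""
    cutoff = DEFINITION_ORDER.index(highest_definition)
    if push_all_lower:
        return DEFINITION_ORDER[:cutoff + 1]   # e.g. ["540p", "720p", "1080p"] when 1080p is allowed
    return [highest_definition]
```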
In some embodiments of the present disclosure, as shown in fig. 8, the determining module 740 may include a calculation unit 741, a determining unit 742 and an acquisition unit 743. The calculation unit 741 is configured to calculate a first difference between the first prediction value and the second prediction value, where the first difference is used to represent the influence that watching the video of the first definition has on the time the target user stays on the client; the determining unit 742 is configured to select one of the first definition and the second definition as the highest definition at which the client can present the video, according to the first difference and a target threshold.
In an embodiment of the present disclosure, the determining unit 742 is specifically configured to: comparing the first difference value with the target threshold value in size; when the first difference is greater than or equal to the target threshold, taking the first definition as the highest definition at which the client can present the video; when the first difference is less than the target threshold, the second definition is taken as the highest definition at which the client can present the video.
In some embodiments of the present disclosure, as shown in fig. 9, the video pushing apparatus may further include a target threshold obtaining module 760, which may be configured to determine the target threshold in advance; in the embodiment of the present disclosure, the target threshold obtaining module 760 is specifically configured to: acquiring user characteristics of sample users; inputting the user characteristics of the sample user into the first prediction model and the second prediction model respectively to obtain a first prediction value and a second prediction value of the stay time of the sample user on the client; performing difference value calculation on the first predicted value and the second predicted value of the sample user to obtain a second difference value, wherein the second difference value is used for representing the influence on the stay time of the sample user on the client after the sample user watches the sample video with the highest definition as the first definition; obtaining target sample users of which the second difference is larger than a threshold value from the sample users; acquiring a service added value generated after a target sample user watches the sample video with the highest definition as the first definition; updating the threshold value according to the service increment value, and taking the updated threshold value as a new threshold value; and returning to the step of obtaining the target sample user with the second difference value larger than the threshold value from the sample users until the generated service increment is smaller than or equal to the target value, and taking the latest updated threshold value as the target threshold value.
In some embodiments of the present disclosure, as shown in fig. 10, the video pushing apparatus may further include a first training module 770 configured to pre-train the first prediction model. Among other things, in an embodiment of the present disclosure, the first training module 770 is specifically configured to: acquiring user characteristics and sample labels of sample users in an experimental group; the sample label is used for indicating the stay time of the sample users in the experimental group on the client after watching the sample video with the highest definition as the first definition; inputting the user characteristics of the experimental group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the experimental group sample users on a client after watching a sample video with the highest definition as a first definition; calculating a loss value between a sample label and a predicted value of the sample user of the experimental group according to a preset algorithm; and training the neural network model according to the loss value and a preset objective function, and obtaining model parameters to generate a first prediction model.
In some embodiments of the present disclosure, as shown in fig. 11, the video push apparatus may further include a second training module 780 configured to pre-train a second prediction model. In an embodiment of the present disclosure, the second training module 780 is specifically configured to: acquiring user characteristics and sample labels of comparison group sample users; the sample label is used for indicating the stay time of a control group sample user on a client after watching a sample video with the highest definition as the second definition; inputting the user characteristics of the comparison group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the comparison group sample users on a client after watching the sample video with the highest definition as the second definition; calculating a loss value between the sample label of the comparison group sample user and the predicted value according to a preset algorithm; and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate a second prediction model.
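The two training modules described above are symmetric, differing only in the group of sample users and the definition of the watched sample video. A minimal sketch of the described training loop follows, assuming a small fully connected network, a mean-squared-error loss as the "preset algorithm", and a fixed number of epochs as a stand-in for the preset objective function; none of these choices is fixed by the disclosure.

```python
import torch
from torch import nn

# Synthetic stand-ins: user features and stay-time labels for one group of sample users
# (the same sketch applies to the experimental group and to the comparison group).
X = torch.rand(1000, 20)
y = torch.rand(1000, 1)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()                                    # loss between sample label and predicted value
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):                                      # simple stand-in for the preset objective
    optimizer.zero_grad()
    pred = model(X)                                       # predicted stay time on the client
    loss = loss_fn(pred, y)
    loss.backward()
    optimizer.step()
# The trained parameters of `model` correspond to the generated prediction model.
```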
In some embodiments of the present disclosure, the video pushing apparatus may further include a fourth obtaining module. The fourth obtaining module is configured to obtain the screen resolution and/or the model performance of the terminal device held by the target user according to the video downloading request; the first obtaining module is further configured to execute the step of obtaining the user characteristics of the target user according to the video downloading request when the screen resolution is greater than or equal to the target screen resolution and/or the model performance meets a preset condition.
In the embodiment of the disclosure, the determining module is further configured to determine the highest definition corresponding to the screen resolution and/or the model performance when the screen resolution is smaller than the target screen resolution and/or the model performance does not meet the preset condition; and the pushing module is also configured to send the video to the client according to the highest definition corresponding to the screen resolution and/or the model performance.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to the video pushing device of the embodiment of the disclosure, a video download request sent by a client is received, the user characteristics of the user are acquired according to the request, the user characteristics are input into a first prediction model and a second prediction model respectively to obtain a first predicted value and a second predicted value of the time the user will stay on the client, the highest definition of the video that the client can present is determined from the two predicted values, and the video is sent to the client according to that highest definition. In this way, the embodiment recommends, for different users and with their user characteristics taken into account, videos that match those characteristics, so that adaptive video definition versions are delivered, the definition offered by the short video service platform is raised reasonably, the playing experience is improved and the service added value (ROI) is kept as large as possible, thereby solving problems in the related art such as the pushed video definition not matching the actual situation of the user and a poor pushing effect.
Fig. 12 is a block diagram illustrating a server 200 according to an example embodiment. As shown in fig. 12, the server 200 may include:
a memory 210 and a processor 220, a bus 230 connecting different components (including the memory 210 and the processor 220), the memory 210 storing instructions executable by the processor 220; wherein the processor 220 is configured to execute the instructions to implement the video pushing method according to the embodiment of the disclosure.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Server 200 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by server 200 and includes both volatile and nonvolatile media, removable and non-removable media. Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The server 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 12, commonly referred to as a "hard drive"). Although not shown in FIG. 12, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 280 having a set (at least one) of program modules 270, including but not limited to an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment, may be stored in, for example, the memory 210. The program modules 270 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
The server 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with the server 200, and/or with any devices (e.g., network card, modem, etc.) that enable the server 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, server 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via network adapter 293. As shown, network adapter 293 communicates with the other modules of server 200 via bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the server 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the server in this embodiment, reference is made to the foregoing explanation of the video pushing method according to the embodiment of the present disclosure, and details are not described here again.
In order to implement the above embodiments, the present disclosure also provides a storage medium.
Wherein the instructions in the storage medium, when executed by the processor of the server, enable the server to perform the video push method as previously described.
To implement the above embodiments, the present disclosure also provides a computer program product, wherein instructions of the computer program product, when executed by a processor, enable a server to execute the video push method as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. A video push method, comprising:
receiving a video downloading request sent by a client, wherein the video downloading request is sent by the client in response to a video playing request of a target user;
acquiring the user characteristics of the target user according to the video downloading request;
inputting the user characteristics of the target user into a first prediction model to obtain a first prediction value of the stay time of the target user on the client; the first prediction model learns the mapping relation between the stay time of an experimental group sample user on the client after watching a sample video with the highest definition as the first definition and the user characteristics of the experimental group sample user;
inputting the user characteristics of the target user into a second prediction model to obtain a second prediction value of the stay time of the target user on the client; the second prediction model learns the mapping relation between the stay time of a contrast group sample user on the client after watching a sample video with the highest definition as the second definition and the user characteristics of the contrast group sample user; wherein the first definition is higher than the second definition;
calculating a first difference between the first predicted value and the second predicted value; the first difference value is used for representing the influence of the target user on the stay time of the target user on the client after the target user watches the video with the first definition;
comparing the first difference value with a target threshold value in size;
when the first difference is greater than or equal to the target threshold, taking the first definition as the highest definition at which the client can present the video; when the first difference is less than the target threshold, taking the second definition as the highest definition at which the client can present the video;
and sending the video to the client according to the highest definition.
2. The video pushing method according to claim 1, wherein before said obtaining the user characteristics of the target user according to the video downloading request, the method further comprises:
acquiring the screen resolution and/or the model performance of the terminal equipment held by the target user according to the video downloading request;
and when the screen resolution is greater than or equal to a target screen resolution and/or the model performance meets a preset condition, executing the step of acquiring the user characteristics of the target user according to the video downloading request.
3. The video push method according to claim 2, further comprising:
when the screen resolution is smaller than the target screen resolution and/or the model performance does not meet the preset condition, determining the highest definition corresponding to the screen resolution and/or the model performance;
and sending the video to the client according to the highest definition corresponding to the screen resolution and/or the model performance.
4. The video push method according to claim 1, characterized in that the target threshold is obtained by:
acquiring user characteristics of sample users;
inputting the user characteristics of the sample user into the first prediction model and the second prediction model respectively to obtain a first prediction value and a second prediction value of the stay time of the sample user on the client;
performing difference value calculation on the first predicted value and the second predicted value of the sample user to obtain a second difference value, wherein the second difference value is used for representing the influence of the sample user on the stay time of the sample user on the client after the sample user watches the sample video with the highest definition being the first definition;
obtaining target sample users of which the second difference is larger than a threshold value from the sample users;
acquiring a service added value generated after a target sample user watches the sample video with the highest definition as the first definition;
updating the threshold value according to the service increment value, and taking the updated threshold value as a new threshold value;
and returning to the step of obtaining the target sample user with the second difference value larger than the threshold value from the sample users until the generated service increment is smaller than or equal to the target value, and taking the latest updated threshold value as the target threshold value.
5. The video pushing method according to claim 1, wherein said sending the video to the client according to the highest definition comprises:
sending the version corresponding to the highest definition of the video to the client; or,
and sending at least one definition version in the versions with the definition of the video being less than or equal to the definition corresponding to the highest definition to the client.
6. The video push method according to claim 1, characterized in that the first predictive model is obtained by training:
acquiring user characteristics and sample labels of sample users in an experimental group; the sample label is used for indicating the stay time of the experimental group sample user on the client after watching the sample video with the highest definition being the first definition;
inputting the user characteristics of the experimental group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the experimental group sample users on the client after watching the sample video with the highest definition as the first definition;
calculating a loss value between the sample label of the experimental group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset objective function, and obtaining model parameters to generate the first prediction model.
7. The video push method according to claim 1, characterized in that the second predictive model is obtained by training:
acquiring user characteristics and sample labels of comparison group sample users; the sample label is used for indicating the stay time of the control group sample users on the client after watching the sample video with the highest definition being the second definition;
inputting the user characteristics of the comparison group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the comparison group sample users on the client after watching the sample video with the highest definition being the second definition;
calculating a loss value between the sample label of the comparison group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate the second prediction model.
8. The video push method according to any of claims 1 to 7, characterized in that the user characteristics comprise:
device information of a held terminal device, network information of the held terminal device, video sharpness requirements, user portrayal, or a combination thereof.
9. A video push apparatus, comprising:
the device comprises a receiving module, a processing module and a display module, wherein the receiving module is configured to receive a video downloading request sent by a client, and the video downloading request is sent by the client in response to a video playing request of a target user;
a first obtaining module configured to obtain a user characteristic of the target user according to the video downloading request;
the second obtaining module is configured to input the user characteristics of the target user into a first prediction model, and obtain a first prediction value of the stay time of the target user on the client; the first prediction model learns the mapping relation between the stay time of an experimental group sample user on the client after watching a sample video with the highest definition as the first definition and the user characteristics of the experimental group sample user;
a third obtaining module, configured to input the user characteristics of the target user into a second prediction model, and obtain a second prediction value of a stay time of the target user on the client; the second prediction model learns the mapping relation between the stay time of a contrast group sample user on the client after watching a sample video with the highest definition as the second definition and the user characteristics of the contrast group sample user; wherein the first definition is higher than the second definition;
a determination module configured to calculate a first difference between the first predicted value and the second predicted value; the first difference value is used for representing the influence of the target user on the stay time of the target user on the client after the target user watches the video with the first definition; comparing the first difference value with a target threshold value in size; when the first difference is greater than or equal to the target threshold, taking the first definition as the highest definition at which the client can present the video; when the first difference is less than the target threshold, taking the second definition as the highest definition at which the client can present the video; and
a push module configured to send the video to the client according to the highest definition.
10. The video push apparatus according to claim 9, further comprising:
the fourth obtaining module is configured to obtain the screen resolution and/or the model performance of the terminal device held by the target user according to the video downloading request;
the first obtaining module is further configured to execute the step of obtaining the user characteristics of the target user according to the video downloading request when the screen resolution is greater than or equal to a target screen resolution and/or the model performance meets a preset condition.
11. The video push apparatus of claim 10,
the determining module is further configured to determine a highest definition corresponding to the screen resolution and/or the model performance when the screen resolution is smaller than the target screen resolution and/or the model performance does not meet the preset condition;
the push module is further configured to send the video to the client according to a highest definition corresponding to the screen resolution and/or the model performance.
12. The video push apparatus according to claim 9, further comprising:
a target threshold acquisition module configured to predetermine the target threshold;
wherein the target threshold acquisition module is specifically configured to:
acquiring user characteristics of sample users;
inputting the user characteristics of the sample user into the first prediction model and the second prediction model respectively to obtain a first prediction value and a second prediction value of the stay time of the sample user on the client;
performing difference value calculation on the first predicted value and the second predicted value of the sample user to obtain a second difference value, wherein the second difference value is used for representing the influence of the sample user on the stay time of the sample user on the client after the sample user watches the sample video with the highest definition being the first definition;
obtaining target sample users of which the second difference is larger than a threshold value from the sample users;
acquiring a service added value generated after a target sample user watches the sample video with the highest definition as the first definition;
updating the threshold value according to the service increment value, and taking the updated threshold value as a new threshold value;
and returning to the step of obtaining the target sample user with the second difference value larger than the threshold value from the sample users until the generated service increment is smaller than or equal to the target value, and taking the latest updated threshold value as the target threshold value.
13. The video pushing device according to claim 9, wherein the pushing module is specifically configured to:
sending the version corresponding to the highest definition of the video to the client; or,
and sending at least one definition version in the versions with the definition of the video being less than or equal to the definition corresponding to the highest definition to the client.
14. The video push apparatus according to claim 9, further comprising:
a first training module configured to pre-train the first predictive model;
wherein the first training module is specifically configured to:
acquiring user characteristics and sample labels of sample users in an experimental group; the sample label is used for indicating the stay time of the experimental group sample user on the client after watching the sample video with the highest definition being the first definition;
inputting the user characteristics of the experimental group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the experimental group sample users on the client after watching the sample video with the highest definition as the first definition;
calculating a loss value between the sample label of the experimental group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate the first prediction model.
15. The video push apparatus according to claim 9, further comprising:
a second training module configured to pre-train the second predictive model;
wherein the second training module is specifically configured to:
acquiring user characteristics and sample labels of sample users in a control group; the sample label is used for indicating the stay time of the control group sample users on the client after watching the sample video with the highest definition being the second definition;
inputting the user characteristics of the comparison group sample users into a neural network model for prediction to obtain a predicted value of the stay time of the comparison group sample users on the client after watching the sample video with the highest definition as the second definition;
calculating a loss value between the sample label of the comparison group sample user and the predicted value according to a preset algorithm;
and training the neural network model according to the loss value and a preset target function, and obtaining model parameters to generate the second prediction model.
16. The video push device according to any of claims 9 to 15, wherein the user features comprise:
device information of a held terminal device, network information of the held terminal device, video sharpness requirements, user portrayal, or a combination thereof.
17. A server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video push method of any of claims 1 to 8.
18. A storage medium in which instructions, when executed by a processor of a server, enable the server to perform a video push method as claimed in any one of claims 1 to 8.
CN202011045153.XA 2020-09-28 2020-09-28 Video pushing method and device, server and storage medium Active CN112333481B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011045153.XA CN112333481B (en) 2020-09-28 2020-09-28 Video pushing method and device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011045153.XA CN112333481B (en) 2020-09-28 2020-09-28 Video pushing method and device, server and storage medium

Publications (2)

Publication Number Publication Date
CN112333481A CN112333481A (en) 2021-02-05
CN112333481B true CN112333481B (en) 2022-10-28

Family

ID=74304351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011045153.XA Active CN112333481B (en) 2020-09-28 2020-09-28 Video pushing method and device, server and storage medium

Country Status (1)

Country Link
CN (1) CN112333481B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025211A (en) * 2021-10-27 2022-02-08 福建野小兽健康科技有限公司 Video issuing method and system adaptive to user equipment
CN115314723B (en) * 2022-06-17 2023-12-12 百果园技术(新加坡)有限公司 Method, device, equipment and storage medium for transmitting initial gear video stream
CN116471323B (en) * 2023-06-19 2023-08-22 广推科技(北京)有限公司 Online crowd behavior prediction method and system based on time sequence characteristics

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102149005A (en) * 2011-04-29 2011-08-10 四川长虹电器股份有限公司 Self-adaptive method for controlling network video quality
CN105654446A (en) * 2016-02-02 2016-06-08 广东欧珀移动通信有限公司 Video definition adjustment method and device
CN105744357A (en) * 2016-02-29 2016-07-06 哈尔滨超凡视觉科技有限公司 Method for reducing network video bandwidth occupation based on online resolution improvement
CN110312148A (en) * 2019-07-15 2019-10-08 贵阳动视云科技有限公司 A kind of adaptive method of transmitting video data, device and medium
WO2019242424A1 (en) * 2018-06-20 2019-12-26 腾讯科技(深圳)有限公司 Video encoding/decoding method and apparatus, computer device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7274661B2 (en) * 2001-09-17 2007-09-25 Altera Corporation Flow control method for quality streaming of audio/video/media over packet networks


Also Published As

Publication number Publication date
CN112333481A (en) 2021-02-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant