WO2018090545A1 - Collaborative filtering method, apparatus, server and storage medium incorporating time factors - Google Patents
Collaborative filtering method, apparatus, server and storage medium incorporating time factors
- Publication number
- WO2018090545A1 (PCT/CN2017/079565)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- time period
- model
- user preference
- smoothing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
- G06Q30/0271—Personalized advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
- G06Q30/0202—Market predictions or forecasting for commercial activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0255—Targeted advertisements based on user history
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
Definitions
- the present invention relates to the field of computer technologies, and in particular, to a collaborative filtering method, apparatus, server, and storage medium that incorporate time factors.
- a collaborative filtering method, apparatus, server, and storage medium incorporating a time factor are provided.
- a collaborative filtering method that integrates time factors including:
- training is performed through the collaborative filtering model to calculate the predicted values of the plurality of user preferences to be predicted in the sparse matrix.
- a collaborative filtering device that integrates time factors including:
- a model building module for establishing an exponential smoothing model
- an obtaining module configured to obtain a time period for the exponential smoothing model, where the time period includes multiple time periods, and to obtain a plurality of user identifiers and the user identifiers' preference values for the specified products in the plurality of time periods;
- a smoothing module configured to perform an iterative calculation on the user preference level value by using the exponential smoothing model, to obtain a smoothing result corresponding to a time period
- a matrix generating module configured to generate a sparse matrix by using the user identifier and the smoothing result corresponding to the time period, where the sparse matrix includes a plurality of user preferences to be predicted;
- the obtaining module is further configured to acquire a collaborative filtering model
- a first training module configured to input the smoothing result corresponding to the time period to the collaborative filtering model, and to perform training through the collaborative filtering model to calculate the predicted values of the plurality of user preferences to be predicted in the sparse matrix.
- a server comprising a memory and a processor, the memory storing computer executable instructions, the computer executable instructions being executed by the processor, such that the processor performs the following steps:
- training is performed through the collaborative filtering model to calculate the predicted values of the plurality of user preferences to be predicted in the sparse matrix.
- One or more non-volatile readable storage media storing computer-executable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
- training is performed through the collaborative filtering model to calculate the predicted values of the plurality of user preferences to be predicted in the sparse matrix.
- FIG. 1 is an application scenario diagram of a collaborative filtering method incorporating time factors in an embodiment
- FIG. 2 is a flow chart of a collaborative filtering method incorporating time factors in an embodiment
- FIG. 3 is a schematic illustration of record points in a two-dimensional space in one embodiment
- FIG. 4 is a block diagram of a server in one embodiment
- FIG. 5 is a block diagram of a collaborative filtering device incorporating time factors in one embodiment
- FIG. 6 is a block diagram of a collaborative filtering device incorporating time factors in another embodiment
- FIG. 7 is a block diagram of a collaborative filtering device incorporating time factors in still another embodiment
- FIG. 8 is a block diagram of a collaborative filtering device incorporating time factors in yet another embodiment.
- the collaborative filtering method incorporating time factors provided in the embodiments of the present application can be applied to the application scenario shown in FIG. 1.
- the terminal 102 and the server 104 are connected through a network. There may be multiple terminals 102.
- An application that can access the server is installed on the terminal 102.
- the server 104 returns a corresponding page to the terminal 102. Users can click, collect, and purchase products displayed on the page.
- the server 104 can collect the user identification and the user behavior described above.
- the server 104 obtains a user preference level value by collecting user behavior for a specified product within a preset time period.
- The server 104 establishes an exponential smoothing model.
- the server 104 can formulate a corresponding time period for the exponential smoothing model, and the time period can contain multiple time periods.
- the server 104 obtains a plurality of user identities and user preference values for the specified products over a plurality of time periods.
- the server 104 inputs the user preference level values corresponding to the plurality of time periods into the exponential smoothing model, and iteratively calculates the user preference level values of the plurality of time periods to obtain a plurality of smoothing results corresponding to the time periods.
- the server 104 generates a sparse matrix corresponding to the product identifier by using the smoothing result corresponding to the user identifier and the time period, and the sparse matrix includes a plurality of user preferences to be predicted.
- the server 104 acquires a collaborative filtering model and inputs the smoothing result corresponding to the time period to the collaborative filtering model. Through the collaborative filtering model, the predicted values of the plurality of user preferences to be predicted in the sparse matrix are calculated.
- a collaborative filtering method incorporating time factors is provided. It should be understood that although the steps in the flowchart of FIG. 2 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Except as explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other sequences. Moreover, at least some of the steps in FIG. 2 may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be executed at different times, and not necessarily in sequence; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
- the method is applied to the server as an example, and specifically includes:
- step 202 an exponential smoothing model is established.
- Step 204 Obtain a time period for the exponential smoothing model, where the time period includes multiple time periods.
- User preference refers to the user's preference for a given product.
- User preferences can be expressed in numerical values.
- User preference data is pre-stored on the server.
- the user preference data includes a user identifier, a product identifier, and a corresponding user preference value.
- the user preference value may be obtained by the server collecting user behaviors for the specified product within a preset time period, and the user behavior includes: clicking, purchasing, and collecting.
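The step above can be sketched in code. The specific weights given to clicks, collections, and purchases are illustrative assumptions; the text only states that these behaviors are collected.

```python
# Sketch: deriving a user preference value from collected behaviors.
# The behavior weights below are assumptions for illustration; the
# patent only states that clicks, purchases, and collections are counted.

BEHAVIOR_WEIGHTS = {"click": 1.0, "collect": 2.0, "purchase": 3.0}

def preference_value(behaviors):
    """Sum weighted behavior counts for one user and one product."""
    return sum(BEHAVIOR_WEIGHTS[b] for b in behaviors)

# A user who clicked twice, collected once, and purchased once:
print(preference_value(["click", "click", "collect", "purchase"]))  # 7.0
```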
- the user preference degree may correspond to a time period. For different designated products, the time period corresponding to the user's preference may be the same or different. For example, for a game product, the time period corresponding to the user preference degree may be one day; for an insurance product, it may be one month.
- the server establishes an exponential smoothing model.
- the user preference of multiple time periods is fused by an exponential smoothing model.
- the server can formulate a corresponding time period for the exponential smoothing model, and there can be multiple time periods in the time period.
- the time period can be formulated according to the specified product characteristics, and different designated products can be formulated for different time periods.
- the time period proposed for the exponential smoothing model of the wealth management product may be one month, and the time period within the time period may be in days.
- the time period proposed for the exponential smoothing model of the insurance product may be one year, and the time period within the time period may be in units of months.
- the exponential coefficient reflects how strongly a time period affects the user's preference: the larger the exponential coefficient, the more important the time period's impact on user preference, and the closer a time period is to the present, the greater its impact on user preference.
- Step 206: Acquire multiple user identifiers and the user identifiers' preference values for the specified products in multiple time periods.
- the time period defined by the server for the exponential smoothing model includes multiple time periods, and the server acquires multiple user identifiers and user preference values of the user identifiers for the specified products in multiple time periods.
- the user preference values of the specified products in multiple time periods may be user preference values for a single specified product, or user preference values for multiple specified products.
- Step 208 Perform an iterative calculation on the user preference value by using an exponential smoothing model to obtain a smoothing result corresponding to the time period.
- the server inputs the user preference values corresponding to the multiple time periods into the exponential smoothing model and iteratively calculates the user preference values of the multiple time periods to obtain a plurality of smoothing results corresponding to the time periods. Specifically, the server obtains the exponential coefficient corresponding to the exponential smoothing model according to the product identifier. The server multiplies the user preference value corresponding to the first time period within the formulated time period by the exponential coefficient and uses the product as the initial value of the exponential smoothing model; the initial value may also be referred to as the smoothing result corresponding to the first time period.
- the server performs an iterative calculation by inputting the smoothing result corresponding to the first time period, the user preference value corresponding to the second time period, and the exponential coefficient into the exponential smoothing model, to obtain the smoothing result corresponding to the second time period.
- the server calculates smooth results corresponding to multiple time periods.
- the model iteratively calculates the user preference values of the previous 4 days, and obtains the corresponding smoothing results as shown in Table 1:
- This combines the user preference value of the specified product with the time factor through an exponential smoothing model.
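The iteration described above can be sketched as follows. This is a minimal sketch assuming the standard single-exponential-smoothing recursion; the patent text here does not reproduce its exact combination of terms, only that the first result is the exponential coefficient times the first preference value and that each later result is computed from the previous smoothing result, the current preference value, and the coefficient.

```python
def exponential_smoothing(preferences, a):
    """Iteratively smooth per-period user preference values.

    `preferences` holds one user preference value per time period and
    `a` is the exponential coefficient. The first smoothing result is
    a * preferences[0] (the model's initial value); each later result
    blends the current preference with the previous smoothing result.
    """
    smoothed = a * preferences[0]          # initial value (first period)
    results = [smoothed]
    for p in preferences[1:]:
        smoothed = a * p + (1 - a) * smoothed
        results.append(smoothed)
    return results

# Four daily preference values smoothed with coefficient a = 0.5:
print(exponential_smoothing([4.0, 2.0, 6.0, 8.0], 0.5))  # [2.0, 2.0, 4.0, 6.0]
```

Closer periods dominate the result because older terms are repeatedly multiplied by (1 - a), matching the statement that nearer time periods have greater impact.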
- Step 210 Generate a sparse matrix by using a user identifier and a smoothing result corresponding to the time period, where the sparse matrix includes a plurality of user preferences to be predicted.
- the server generates a sparse matrix corresponding to the product identifier and the product identifier by using the smoothing result corresponding to the user identifier and the time period.
- the sparse matrix may include multiple user identifiers and one product identifier, and may also include multiple user identifiers and multiple product identifiers.
- the sparse matrix includes known user preference values and unknown user preference values. The unknown user preference values are the user preferences to be predicted.
- the user preference to be predicted can be represented by a preset character, for example, a "?" character.
- a row in the sparse matrix represents a product identifier
- a column represents a user identifier
- a value in the sparse matrix represents the user's preference value for the product, as shown in Table 2 below:
- the server obtains the product identifiers, the user identifiers, and the smoothing results of the user preference values in the current time period, to generate the sparse matrix corresponding to the user identifiers and the product identifiers.
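The matrix assembly described above can be sketched as follows; the dictionary layout, names, and sample values are illustrative assumptions, with "?" marking the preferences still to be predicted.

```python
# Sketch: assembling the user-product sparse matrix from smoothing
# results, with "?" marking user preferences to be predicted.

def build_sparse_matrix(users, products, smoothed):
    """`smoothed` maps (user_id, product_id) -> latest smoothing result."""
    return {
        u: {p: smoothed.get((u, p), "?") for p in products}
        for u in users
    }

matrix = build_sparse_matrix(
    ["user1", "user2"],
    ["productA", "productB"],
    {("user1", "productA"): 4.0, ("user2", "productB"): 6.0},
)
print(matrix["user1"]["productB"])  # "?": a preference to be predicted
```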
- Step 212 Acquire a collaborative filtering model, and input the smoothing result corresponding to the time period to the collaborative filtering model.
- Step 214: Perform training through the collaborative filtering model, and calculate the predicted values of the plurality of user preferences to be predicted in the sparse matrix.
- the collaborative filtering model can use the traditional collaborative filtering model.
- the server acquires a collaborative filtering model, and inputs the smoothing result corresponding to the time period to the collaborative filtering model.
- through the collaborative filtering model, the predicted values of the plurality of user preferences to be predicted in the sparse matrix are calculated.
- the server when predicting an unknown user preference level value in the next time period, the server obtains a smoothing result of the plurality of user identifiers in the previous time period, and inputs the smoothing processing result of the previous time period to the collaborative filtering model.
- through the collaborative filtering model, the predicted values of the user preferences to be predicted in the sparse matrix corresponding to the user identifiers and the product identifier are calculated for the next time period.
- the user preference values in multiple time periods are iteratively calculated to obtain smoothing results corresponding to the time periods, so that the user preference values of the specified product are effectively fused with the time factor.
- When predicting an unknown user preference value in the next time period, the user identifiers and the smoothing results corresponding to the time periods can be used to generate a sparse matrix; the smoothing results corresponding to the time periods are input to the collaborative filtering model, and the collaborative filtering model is trained, thereby calculating the predicted values of the plurality of user preferences to be predicted in the sparse matrix. Since the smoothing results input to the collaborative filtering model are fused with the time factor, user preference values associated with both the specified product and the time factor can be predicted. The time factor is thereby combined to effectively predict user preferences for the specified product.
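The patent calls for a "traditional collaborative filtering model" without naming one. A minimal sketch, assuming matrix factorization trained by stochastic gradient descent as the stand-in model: known smoothed preferences are fit by latent factors, and an unknown entry is predicted with a dot product. All hyperparameters below are illustrative.

```python
import random

def train_cf(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=500):
    """Fit latent factor matrices P (users) and Q (items) to the known
    entries in `ratings`, a list of (user, item, smoothed preference)."""
    random.seed(0)
    P = [[random.random() * 0.1 for _ in range(k)] for _ in range(n_users)]
    Q = [[random.random() * 0.1 for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)   # gradient step on P
                Q[i][f] += lr * (err * pu - reg * qi)   # gradient step on Q
    return P, Q

def predict(P, Q, u, i):
    """Predicted preference for an unknown (user, item) entry."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# Two users with similar tastes; user 1's preference for item 1 is unknown.
ratings = [(0, 0, 4.0), (0, 1, 5.0), (1, 0, 4.0)]
P, Q = train_cf(ratings, n_users=2, n_items=2)
print(round(predict(P, Q, 1, 1), 2))  # close to user 0's preference pattern
```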
- the method further includes: acquiring dimensions corresponding to the user preference values; performing statistics on the user preference values of the plurality of dimensions according to the user identifier; performing regularization processing on the statistical result to obtain a multi-dimensional vector corresponding to the user identifier; and calculating the user similarity between user identifiers according to the multi-dimensional vectors.
- the similarity calculation may be performed on all the known and predicted user preference values, thereby obtaining multiple user identifiers with similar user preferences.
- the server can use the product identification as the dimension corresponding to the user preference value. Different product identifiers are different dimensions.
- the user preference values can be regarded as record points scattered in a space. Taking a map, in which the space is two-dimensional, as an example, each record point can be represented by longitude and latitude, as shown in FIG. 3.
- the X axis in FIG. 3 can represent latitude and the Y axis represents longitude. It is assumed that the user preference values of user identifier 1 are represented by the black dots among the record points in FIG. 3, and the user preference values of user identifier 2 are represented by the gray dots. There are 4 record points for user identifier 1 and 3 record points for user identifier 2.
- the server clusters all the recorded points.
- the server can use the KMeans algorithm (a clustering algorithm) to cluster the record points into multiple classes.
- Each class can have a corresponding dimension.
- Each category includes a record point of a user preference value corresponding to a plurality of user IDs.
- the server collects statistics on user preference levels of multiple dimensions according to the user identifier, and obtains a statistical result of the user preference value.
- the server performs regularization processing on the statistical result to obtain a multi-dimensional vector corresponding to the user identifier, and calculates a similarity distance between the user identifiers according to the multi-dimensional vector, and uses the similarity distance as the similarity of the user's preference.
- the recording points corresponding to the user ID 1 and the user ID 2 in FIG. 3 are taken as an example for description.
- the server clusters the record points in Figure 3 to obtain three dimensions.
- the user identifier 1 has 2 record points in the first dimension, 1 record point in the second dimension, and 1 record point in the third dimension.
- User ID 2 has 2 record points in the first dimension, 1 record point in the second dimension, and 0 record points in the third dimension.
- the total number of record points of user preference values corresponding to user identifier 1 is 4, and the total number of record points corresponding to user identifier 2 is 3.
- the server performs regularization processing on the statistical results to obtain a multidimensional vector (2/4, 1/4, 1/4) corresponding to user identifier 1 and a multidimensional vector (2/3, 1/3, 0) corresponding to user identifier 2.
- the similarity distance between the user identifier 1 and the user identifier 2 is calculated according to the multi-dimensional vector, and the similarity distance is taken as the similarity of the user's preference.
- the similarity distance may be calculated in various ways; for example, the Euclidean distance may be used to calculate the similarity distance.
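The worked example above can be sketched as follows. For brevity the per-dimension record-point counts are taken as given (the patent obtains them by KMeans clustering), and the Euclidean distance is used as the similarity distance, as the text suggests.

```python
import math

def regularized_vector(counts):
    """Divide each dimension's record-point count by the user's total."""
    total = sum(counts)
    return [c / total for c in counts]

def euclidean_distance(v1, v2):
    """Similarity distance between two regularized user vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

# User identifier 1: 2, 1, 1 record points in the three dimensions (4 total).
# User identifier 2: 2, 1, 0 record points in the three dimensions (3 total).
v1 = regularized_vector([2, 1, 1])   # (2/4, 1/4, 1/4)
v2 = regularized_vector([2, 1, 0])   # (2/3, 1/3, 0)
print(euclidean_distance(v1, v2))
```

A smaller distance means more similar user preferences; identical regularized vectors give a distance of 0.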
- the method further includes: obtaining positive samples and negative samples corresponding to the user preference values according to the product identifier and the user identifier; splitting the negative samples to obtain a plurality of split negative-sample sets, where the difference between the number of negative samples in each split and the number of positive samples is within a preset range; obtaining a classification model, and training the classification model with the positive samples and each split negative-sample set to obtain a plurality of trained classification models; and fitting the plurality of trained classification models to calculate the classification weight corresponding to each trained classification model.
- the server may further obtain a positive sample and a negative sample corresponding to the user preference level value according to the product identifier and the user identifier.
- a positive sample indicates that the user likes a product
- a negative sample indicates that the user does not like a product.
- the positive sample is that user 1 likes iPhone 7 (a type of mobile phone), and the negative sample is that user 2 does not like iPhone 7.
- the user preference value includes a known user preference value and a predicted user preference value.
- the server may perform classification training using known user preference values, or may perform classification training using known user preference values and predicted user preference values.
- Positive and negative samples can be collectively referred to as samples.
- Corresponding sample data is pre-stored on the server, and the sample data includes user characteristic data and product feature data.
- the user feature data includes the age and gender of the user, and the product feature data includes the product identifier and the product type.
- the number of users who like the new product is much smaller than the number of users who do not like the new product.
- the number of positive samples for a product is less than the number of negative samples.
- one traditional way is to obtain, by under-sampling the negative samples, a number of negative samples corresponding to the number of positive samples, and to use the under-sampled negative samples together with the positive samples for classification training.
- another traditional way is to copy the positive samples so that the number of positive samples is basically the same as the number of negative samples.
- this second traditional way adds no additional sample information; because the number of negative samples is much larger than the number of positive samples, copying the positive samples increases the amount of data that needs to be calculated, which increases the computing burden of the server.
- the server obtains a positive sample and a negative sample corresponding to the user preference value according to the product identifier and the user identifier.
- the server splits the negative samples based on the number of positive samples. The difference between the number of negative samples in each split and the number of positive samples is within a preset range.
- the number of negative samples in each split may be equal or approximately equal to the number of positive samples.
- the server obtains a classification model, wherein the classification model can adopt a traditional classification model.
- the server trains the classification model with each split negative-sample set together with the positive samples, obtaining as many trained classification models as there are splits of the negative samples.
- the server obtains a regression model, wherein the regression model can adopt a traditional regression model.
- the server inputs the output results of the plurality of trained classification models into the regression model, fits the plurality of trained classification models through the regression model, and calculates the classification weight corresponding to each trained classification model. Throughout the process, all sample data is fully used while the amount of data to be calculated does not surge, effectively easing the computing burden of the server.
- the method further comprises: acquiring sample data to be classified; and classifying the sample data by using the trained classification model and the classification weight.
- the server may obtain sample data to be classified, input it into the trained classification models, and classify it by using each trained classification model and its classification weight. This allows sample data to be classified quickly and efficiently.
- a server in one embodiment, as shown in FIG. 4, includes a processor coupled via a system bus, an internal memory, a non-volatile storage medium, and a network interface.
- the non-volatile storage medium of the server stores an operating system and computer executable instructions, and the computer executable instructions are used to implement a collaborative filtering method suitable for a server.
- the processor is used to provide computing and control capabilities to support the operation of the entire server.
- the network interface is used to communicate with external terminals via a network connection.
- the server can be implemented as a stand-alone server or a server cluster consisting of multiple servers. It will be understood by those skilled in the art that the structure shown in FIG. 4 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the server to which the solution of the present application is applied; a specific server may include more or fewer components than shown in the figure, combine some components, or have a different component arrangement.
- a collaborative filtering device incorporating time factors is provided, including: a model establishing module 502, an obtaining module 504, a smoothing module 506, a matrix generating module 508, and a first training module 510, wherein:
- the model building module 502 is configured to establish an exponential smoothing model.
- the obtaining module 504 is configured to obtain a time period for the exponential smoothing model, where the time period includes a plurality of time periods, and to obtain a plurality of user identifiers and the user identifiers' preference values for the specified products in the plurality of time periods.
- the smoothing module 506 is configured to perform an iterative calculation on the user preference level value by using an exponential smoothing model to obtain a smoothing result corresponding to the time period.
- the matrix generation module 508 is configured to generate a sparse matrix by using a user identifier and a smoothing result corresponding to the time period, where the sparse matrix includes a plurality of user preferences to be predicted.
- the obtaining module 504 is further configured to acquire a collaborative filtering model.
- the first training module 510 is configured to input the smoothing result corresponding to the time period to the collaborative filtering model, and to perform training through the collaborative filtering model to calculate the predicted values of the plurality of user preferences to be predicted in the sparse matrix.
- the formula for the smoothing model includes:
- a represents the exponential coefficient corresponding to the product identifier
- P_{t+1} represents the user preference value corresponding to the next time period
- P_t represents the user preference value corresponding to the current time period
- P_{t-1} represents the user preference value corresponding to the previous time period.
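The formula itself does not survive in this text. One plausible reading, assuming standard single exponential smoothing consistent with the iteration described earlier (initial value a times the first preference value, then each smoothing result blending the current preference with the previous result), with S_t introduced here to denote the smoothing result of period t, is:

```latex
S_1 = a \cdot P_1, \qquad
S_t = a \cdot P_t + (1 - a)\, S_{t-1}, \quad t \ge 2,
```

with the next period's preference P_{t+1} predicted as the latest smoothing result S_t.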
- the obtaining module 504 is further configured to obtain a dimension corresponding to the user preference value.
- the device further includes: a statistics module 512, a regularization module 514, and a similarity calculation module 516, where:
- the statistics module 512 is configured to perform statistics on user preference levels of multiple dimensions according to the user identifier.
- the regularization module 514 is configured to perform regularization processing on the statistical result to obtain a multi-dimensional vector corresponding to the user identifier.
- the similarity calculation module 516 is configured to calculate, according to the multi-dimensional vector, the similarity of user preferences between the user identifiers.
- the obtaining module 504 is further configured to obtain positive samples and negative samples corresponding to the user preference values according to the product identifier and the user identifier; as shown in FIG. 6, the device further includes: a splitting module 518, a second training module 520, and a fitting module 522, wherein:
- the splitting module 518 is configured to split the negative sample to obtain a plurality of split negative samples, and the difference between the number of split negative samples and the number of positive samples is within a preset range.
- the acquisition module 504 is also used to acquire a classification model.
- the second training module 520 is configured to train the classification model by using the positive sample and the split negative sample to obtain a plurality of trained classification models.
- the fitting module 522 is configured to fit a plurality of trained classification models, and calculate a classification weight corresponding to each of the trained classification models.
- the obtaining module 504 is further configured to acquire sample data to be classified; as shown in FIG. 7, the apparatus further includes: a classification module 524 configured to classify the sample data by using the trained classification models and the classification weights.
- the various modules in the above collaborative filtering device in combination with a time factor may be implemented in whole or in part by software, hardware, or a combination thereof.
- the above modules may be embedded in the hardware of the base station, or may be stored in the memory of the base station in software form, so that the processor can invoke the operations corresponding to the above modules.
- the processor may be a central processing unit (CPU) or a microprocessor.
- one or more non-volatile readable storage media storing computer-executable instructions are provided, the computer-executable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps:
- training is performed through the collaborative filtering model, and the predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix are calculated.
- the formula of the smoothing model includes: Pt+1 = a*Pt + (1-a)*Pt-1, where:
- a represents the exponential coefficient corresponding to the product identifier;
- Pt+1 represents the user preference value corresponding to the next time period;
- Pt represents the user preference value corresponding to the current time period; and
- Pt-1 represents the user preference value corresponding to the previous time period.
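As a rough numerical illustration of iterating this recurrence, with the coefficient a and the seed preference values invented for the example:

```python
def smooth(p_prev, p_cur, a, steps):
    """Iterate the smoothing recurrence Pt+1 = a*Pt + (1-a)*Pt-1,
    starting from the previous and current period's preference values,
    and return the smoothed value for each subsequent time period."""
    out = []
    for _ in range(steps):
        p_next = a * p_cur + (1 - a) * p_prev
        out.append(p_next)
        p_prev, p_cur = p_cur, p_next
    return out

# a = 0.7 weights the current period more heavily than the previous one.
series = smooth(p_prev=4.0, p_cur=2.0, a=0.7, steps=3)
# series -> [2.6, 2.42, 2.474]
```

Each iteration blends the two most recent values, so an abrupt change in a user's preference is damped rather than taken at face value.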
- the one or more processors further perform the following steps: obtaining the dimensions corresponding to the user preference values; aggregating the user preference values of multiple dimensions according to the user identifiers; performing regularization on the statistical results to obtain multi-dimensional vectors corresponding to the user identifiers; and calculating the similarity of user preferences between the user identifiers according to the multi-dimensional vectors.
- when the computer-executable instructions are executed by the one or more processors, the one or more processors further perform the following steps: obtaining a positive sample and a negative sample corresponding to the user preference values according to the product identifier and the user identifier; splitting the negative sample to obtain a plurality of split negative samples, wherein the difference between the number of the split negative samples and the number of the positive samples is within a preset range; obtaining a classification model, and training the classification model by using the positive sample and the split negative samples to obtain a plurality of trained classification models; and fitting the plurality of trained classification models to calculate a classification weight corresponding to each trained classification model.
- when the computer-executable instructions are executed by the one or more processors, the one or more processors further perform the following steps: obtaining sample data to be classified; and classifying the sample data to be classified by using the trained classification models and the classification weights.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.
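The collaborative filtering step recited above, training a model that computes predicted values for the to-be-predicted entries of the sparse matrix, could be realized by many algorithms; a plain SGD matrix factorization is one possibility (the factor rank, learning rate, and epoch count here are illustrative assumptions, not choices made by the patent):

```python
import random

def train_cf(ratings, n_users, n_items, k=2, lr=0.05, epochs=500, seed=0):
    """Factorize a sparse preference matrix R ~ U @ V^T by SGD over the
    observed (user, item, value) triples; missing entries are then
    predicted as the dot product of the learned factor rows."""
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):  # simultaneous gradient step on both factors
                U[u][f], V[i][f] = (U[u][f] + lr * err * V[i][f],
                                    V[i][f] + lr * err * U[u][f])
    return U, V

# Hypothetical smoothed preference levels for (user, product) pairs;
# the unobserved pairs are the "to-be-predicted" entries.
observed = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 2.0)]
U, V = train_cf(observed, n_users=3, n_items=3)
predict = lambda u, i: sum(a * b for a, b in zip(U[u], V[i]))
```

After training, `predict(u, i)` fills in any missing cell of the sparse matrix, which is the role the collaborative filtering model plays in the claimed pipeline.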
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Strategic Management (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Entrepreneurship & Innovation (AREA)
- Data Mining & Analysis (AREA)
- General Business, Economics & Management (AREA)
- Economics (AREA)
- Marketing (AREA)
- Game Theory and Decision Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Complex Calculations (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
Claims (20)
- A collaborative filtering method in combination with a time factor, comprising: establishing an exponential smoothing model; obtaining a time span formulated for the exponential smoothing model, the time span comprising a plurality of time periods; obtaining a plurality of user identifiers and the user preference values of each user identifier for a specified product within the plurality of time periods; performing iterative calculation on the user preference values by using the exponential smoothing model to obtain smoothing results corresponding to the time periods; generating a sparse matrix by using the user identifiers and the smoothing results corresponding to the time periods, the sparse matrix comprising a plurality of to-be-predicted user preference levels; obtaining a collaborative filtering model, and inputting the smoothing results corresponding to the time periods into the collaborative filtering model; and performing training through the collaborative filtering model to calculate predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix.
- The method according to claim 1, wherein the formula of the smoothing model comprises: Pt+1 = a*Pt + (1-a)*Pt-1, where a represents the exponential coefficient corresponding to the product identifier; Pt+1 represents the user preference value corresponding to the next time period; Pt represents the user preference value corresponding to the current time period; and Pt-1 represents the user preference value corresponding to the previous time period.
- The method according to claim 1, wherein after the step of calculating the predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix, the method further comprises: obtaining the dimensions corresponding to the user preference values; aggregating the user preference values of multiple dimensions according to the user identifiers; performing regularization on the statistical results to obtain multi-dimensional vectors corresponding to the user identifiers; and calculating the similarity of user preferences between the user identifiers according to the multi-dimensional vectors.
- The method according to claim 1, further comprising: obtaining a positive sample and a negative sample corresponding to the user preference values according to the product identifier and the user identifiers; splitting the negative sample to obtain a plurality of split negative samples, wherein the difference between the number of the split negative samples and the number of the positive samples is within a preset range; obtaining a classification model, and training the classification model by using the positive sample and the split negative samples to obtain a plurality of trained classification models; and fitting the plurality of trained classification models to calculate a classification weight corresponding to each trained classification model.
- The method according to claim 4, wherein after the step of calculating the classification weight corresponding to each trained classification model, the method further comprises: obtaining sample data to be classified; and classifying the sample data to be classified by using the trained classification models and the classification weights.
- A collaborative filtering apparatus in combination with a time factor, comprising: a model establishing module, configured to establish an exponential smoothing model; an obtaining module, configured to obtain a time span formulated for the exponential smoothing model, the time span comprising a plurality of time periods, and to obtain a plurality of user identifiers and the user preference values of each user identifier for a specified product within the plurality of time periods; a smoothing module, configured to perform iterative calculation on the user preference values by using the exponential smoothing model to obtain smoothing results corresponding to the time periods; a matrix generating module, configured to generate a sparse matrix by using the user identifiers and the smoothing results corresponding to the time periods, the sparse matrix comprising a plurality of to-be-predicted user preference levels; the obtaining module being further configured to obtain a collaborative filtering model; and a first training module, configured to input the smoothing results corresponding to the time periods into the collaborative filtering model, and to perform training through the collaborative filtering model to calculate predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix.
- The apparatus according to claim 6, wherein the formula of the smoothing model comprises: Pt+1 = a*Pt + (1-a)*Pt-1, where a represents the exponential coefficient corresponding to the product identifier; Pt+1 represents the user preference value corresponding to the next time period; Pt represents the user preference value corresponding to the current time period; and Pt-1 represents the user preference value corresponding to the previous time period.
- The apparatus according to claim 6, wherein the obtaining module is further configured to obtain the dimensions corresponding to the user preference values; and the apparatus further comprises: a statistics module, configured to aggregate the user preference values of multiple dimensions according to the user identifiers; a regularization module, configured to perform regularization on the statistical results to obtain multi-dimensional vectors corresponding to the user identifiers; and a similarity calculation module, configured to calculate the similarity of user preferences between the user identifiers according to the multi-dimensional vectors.
- The apparatus according to claim 6, wherein the obtaining module is further configured to obtain a positive sample and a negative sample corresponding to the user preference values according to the product identifier and the user identifiers; and the apparatus further comprises: a splitting module, configured to split the negative sample to obtain a plurality of split negative samples, the difference between the number of the split negative samples and the number of the positive samples being within a preset range; the obtaining module being further configured to obtain a classification model; a second training module, configured to train the classification model by using the positive sample and the split negative samples to obtain a plurality of trained classification models; and a fitting module, configured to fit the plurality of trained classification models to calculate a classification weight corresponding to each trained classification model.
- The apparatus according to claim 9, wherein the obtaining module is further configured to obtain sample data to be classified; and the apparatus further comprises: a classification module, configured to classify the sample data to be classified by using the trained classification models and the classification weights.
- A server comprising a memory and a processor, the memory storing computer-executable instructions that, when executed by the processor, cause the processor to perform the following steps: establishing an exponential smoothing model; obtaining a time span formulated for the exponential smoothing model, the time span comprising a plurality of time periods; obtaining a plurality of user identifiers and the user preference values of each user identifier for a specified product within the plurality of time periods; performing iterative calculation on the user preference values by using the exponential smoothing model to obtain smoothing results corresponding to the time periods; generating a sparse matrix by using the user identifiers and the smoothing results corresponding to the time periods, the sparse matrix comprising a plurality of to-be-predicted user preference levels; obtaining a collaborative filtering model, and inputting the smoothing results corresponding to the time periods into the collaborative filtering model; and performing training through the collaborative filtering model to calculate predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix.
- The server according to claim 11, wherein the formula of the smoothing model comprises: Pt+1 = a*Pt + (1-a)*Pt-1, where a represents the exponential coefficient corresponding to the product identifier; Pt+1 represents the user preference value corresponding to the next time period; Pt represents the user preference value corresponding to the current time period; and Pt-1 represents the user preference value corresponding to the previous time period.
- The server according to claim 11, wherein after the step of calculating the predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix, the processor is further caused to perform the following steps: obtaining the dimensions corresponding to the user preference values; aggregating the user preference values of multiple dimensions according to the user identifiers; performing regularization on the statistical results to obtain multi-dimensional vectors corresponding to the user identifiers; and calculating the similarity of user preferences between the user identifiers according to the multi-dimensional vectors.
- The server according to claim 11, wherein the processor is further caused to perform the following steps: obtaining a positive sample and a negative sample corresponding to the user preference values according to the product identifier and the user identifiers; splitting the negative sample to obtain a plurality of split negative samples, the difference between the number of the split negative samples and the number of the positive samples being within a preset range; obtaining a classification model, and training the classification model by using the positive sample and the split negative samples to obtain a plurality of trained classification models; and fitting the plurality of trained classification models to calculate a classification weight corresponding to each trained classification model.
- The server according to claim 14, wherein after the step of calculating the classification weight corresponding to each trained classification model, the processor is further caused to perform the following steps: obtaining sample data to be classified; and classifying the sample data to be classified by using the trained classification models and the classification weights.
- One or more non-volatile readable storage media storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: establishing an exponential smoothing model; obtaining a time span formulated for the exponential smoothing model, the time span comprising a plurality of time periods; obtaining a plurality of user identifiers and the user preference values of each user identifier for a specified product within the plurality of time periods; performing iterative calculation on the user preference values by using the exponential smoothing model to obtain smoothing results corresponding to the time periods; generating a sparse matrix by using the user identifiers and the smoothing results corresponding to the time periods, the sparse matrix comprising a plurality of to-be-predicted user preference levels; obtaining a collaborative filtering model, and inputting the smoothing results corresponding to the time periods into the collaborative filtering model; and performing training through the collaborative filtering model to calculate predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix.
- The non-volatile readable storage media according to claim 16, wherein the formula of the smoothing model comprises: Pt+1 = a*Pt + (1-a)*Pt-1, where a represents the exponential coefficient corresponding to the product identifier; Pt+1 represents the user preference value corresponding to the next time period; Pt represents the user preference value corresponding to the current time period; and Pt-1 represents the user preference value corresponding to the previous time period.
- The non-volatile readable storage media according to claim 16, wherein after the step of calculating the predicted values of the plurality of to-be-predicted user preference levels in the sparse matrix, the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps: obtaining the dimensions corresponding to the user preference values; aggregating the user preference values of multiple dimensions according to the user identifiers; performing regularization on the statistical results to obtain multi-dimensional vectors corresponding to the user identifiers; and calculating the similarity of user preferences between the user identifiers according to the multi-dimensional vectors.
- The non-volatile readable storage media according to claim 16, wherein the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps: obtaining a positive sample and a negative sample corresponding to the user preference values according to the product identifier and the user identifiers; splitting the negative sample to obtain a plurality of split negative samples, the difference between the number of the split negative samples and the number of the positive samples being within a preset range; obtaining a classification model, and training the classification model by using the positive sample and the split negative samples to obtain a plurality of trained classification models; and fitting the plurality of trained classification models to calculate a classification weight corresponding to each trained classification model.
- The non-volatile readable storage media according to claim 19, wherein after the step of calculating the classification weight corresponding to each trained classification model, the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps: obtaining sample data to be classified; and classifying the sample data to be classified by using the trained classification models and the classification weights.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/578,368 US10565525B2 (en) | 2016-11-15 | 2017-04-06 | Collaborative filtering method, apparatus, server and storage medium in combination with time factor |
KR1020187015328A KR102251302B1 (ko) | 2016-11-15 | 2017-04-06 | 시간 인자와 결합한 협업 필터링 방법, 장치, 서버 및 저장 매체 |
SG11201709930TA SG11201709930TA (en) | 2016-11-15 | 2017-04-06 | Collaborative filtering method, apparatus, server and storage medium in combination with time factor |
JP2017566628A JP6484730B2 (ja) | 2016-11-15 | 2017-04-06 | 時間因子を融合させる協調フィルタリング方法、装置、サーバおよび記憶媒体 |
EP17801315.7A EP3543941A4 (en) | 2016-11-15 | 2017-04-06 | METHOD, DEVICE, SERVER AND STORAGE MEDIUM FOR COLLABORATIVE TIME FACTOR FILTER FILTERING |
AU2017268629A AU2017268629A1 (en) | 2016-11-15 | 2017-04-06 | Collaborative filtering method, apparatus, server and storage medium in combination with time factor |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611005200.1A CN106530010B (zh) | 2016-11-15 | 2016-11-15 | 融合时间因素的协同过滤方法和装置 |
CN201611005200.1 | 2016-11-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018090545A1 true WO2018090545A1 (zh) | 2018-05-24 |
Family
ID=58353220
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/079565 WO2018090545A1 (zh) | 2016-11-15 | 2017-04-06 | 融合时间因素的协同过滤方法、装置、服务器和存储介质 |
Country Status (9)
Country | Link |
---|---|
US (1) | US10565525B2 (zh) |
EP (1) | EP3543941A4 (zh) |
JP (1) | JP6484730B2 (zh) |
KR (1) | KR102251302B1 (zh) |
CN (1) | CN106530010B (zh) |
AU (2) | AU2017268629A1 (zh) |
SG (1) | SG11201709930TA (zh) |
TW (1) | TWI658420B (zh) |
WO (1) | WO2018090545A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111209977A (zh) * | 2020-01-16 | 2020-05-29 | 北京百度网讯科技有限公司 | 分类模型的训练和使用方法、装置、设备和介质 |
CN111652741A (zh) * | 2020-04-30 | 2020-09-11 | 中国平安财产保险股份有限公司 | 用户偏好分析方法、装置及可读存储介质 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110378731B (zh) * | 2016-04-29 | 2021-04-20 | 腾讯科技(深圳)有限公司 | 获取用户画像的方法、装置、服务器及存储介质 |
CN106530010B (zh) | 2016-11-15 | 2017-12-12 | 平安科技(深圳)有限公司 | 融合时间因素的协同过滤方法和装置 |
CN107633254A (zh) * | 2017-07-25 | 2018-01-26 | 平安科技(深圳)有限公司 | 建立预测模型的装置、方法及计算机可读存储介质 |
CN109060332A (zh) * | 2018-08-13 | 2018-12-21 | 重庆工商大学 | 一种基于协同过滤融合进行声波信号分析的机械设备诊断法 |
CN109800359B (zh) * | 2018-12-20 | 2021-08-17 | 北京百度网讯科技有限公司 | 信息推荐处理方法、装置、电子设备及可读存储介质 |
US10984461B2 (en) * | 2018-12-26 | 2021-04-20 | Paypal, Inc. | System and method for making content-based recommendations using a user profile likelihood model |
CN110580311B (zh) * | 2019-06-21 | 2023-08-01 | 东华大学 | 动态时间感知协同过滤方法 |
CN110458664B (zh) * | 2019-08-06 | 2021-02-02 | 上海新共赢信息科技有限公司 | 一种用户出行信息预测方法、装置、设备及存储介质 |
CN110992127B (zh) * | 2019-11-14 | 2023-09-29 | 北京沃东天骏信息技术有限公司 | 一种物品推荐方法及装置 |
CN111178986B (zh) * | 2020-02-18 | 2023-04-07 | 电子科技大学 | 用户-商品偏好的预测方法及系统 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103235823A (zh) * | 2013-05-06 | 2013-08-07 | 上海河广信息科技有限公司 | 根据相关网页和当前行为确定用户当前兴趣的方法和系统 |
US20150039620A1 (en) * | 2013-07-31 | 2015-02-05 | Google Inc. | Creating personalized and continuous playlists for a content sharing platform based on user history |
CN105975483A (zh) * | 2016-04-25 | 2016-09-28 | 北京三快在线科技有限公司 | 一种基于用户偏好的消息推送方法和平台 |
CN106530010A (zh) * | 2016-11-15 | 2017-03-22 | 平安科技(深圳)有限公司 | 融合时间因素的协同过滤方法和装置 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2438602A (en) * | 2000-10-18 | 2002-04-29 | Johnson & Johnson Consumer | Intelligent performance-based product recommendation system |
US8392228B2 (en) * | 2010-03-24 | 2013-03-05 | One Network Enterprises, Inc. | Computer program product and method for sales forecasting and adjusting a sales forecast |
US7657526B2 (en) * | 2006-03-06 | 2010-02-02 | Veveo, Inc. | Methods and systems for selecting and presenting content based on activity level spikes associated with the content |
JP5778626B2 (ja) | 2012-06-18 | 2015-09-16 | 日本電信電話株式会社 | アイテム利用促進装置、アイテム利用促進装置の動作方法およびコンピュータプログラム |
KR101676219B1 (ko) * | 2012-12-17 | 2016-11-14 | 아마데우스 에스.에이.에스. | 상호작용식 검색 폼을 위한 추천 엔진 |
US9348924B2 (en) * | 2013-03-15 | 2016-05-24 | Yahoo! Inc. | Almost online large scale collaborative filtering based recommendation system |
US20140272914A1 (en) | 2013-03-15 | 2014-09-18 | William Marsh Rice University | Sparse Factor Analysis for Learning Analytics and Content Analytics |
CN103473354A (zh) * | 2013-09-25 | 2013-12-25 | 焦点科技股份有限公司 | 基于电子商务平台的保险推荐系统框架及保险推荐方法 |
CN104021163B (zh) * | 2014-05-28 | 2017-10-24 | 深圳市盛讯达科技股份有限公司 | 产品推荐系统及方法 |
CN104166668B (zh) * | 2014-06-09 | 2018-02-23 | 南京邮电大学 | 基于folfm模型的新闻推荐系统及方法 |
KR101658714B1 (ko) * | 2014-12-22 | 2016-09-21 | 연세대학교 산학협력단 | 온라인 활동 이력에 기초한 사용자의 온라인 활동 예측 방법 및 시스템 |
CN105205128B (zh) * | 2015-09-14 | 2018-08-28 | 清华大学 | 基于评分特征的时序推荐方法及推荐装置 |
CN106960354A (zh) * | 2016-01-11 | 2017-07-18 | 中国移动通信集团河北有限公司 | 一种基于客户生命周期的精准化推荐方法及装置 |
CN107194754A (zh) * | 2017-04-11 | 2017-09-22 | 美林数据技术股份有限公司 | 基于混合协同过滤的券商产品推荐方法 |
CN107169052B (zh) * | 2017-04-26 | 2019-03-05 | 北京小度信息科技有限公司 | 推荐方法及装置 |
- 2016
- 2016-11-15 CN CN201611005200.1A patent/CN106530010B/zh active Active
- 2017
- 2017-04-06 SG SG11201709930TA patent/SG11201709930TA/en unknown
- 2017-04-06 US US15/578,368 patent/US10565525B2/en active Active
- 2017-04-06 JP JP2017566628A patent/JP6484730B2/ja active Active
- 2017-04-06 EP EP17801315.7A patent/EP3543941A4/en not_active Withdrawn
- 2017-04-06 AU AU2017268629A patent/AU2017268629A1/en active Pending
- 2017-04-06 KR KR1020187015328A patent/KR102251302B1/ko active IP Right Grant
- 2017-04-06 AU AU2017101862A patent/AU2017101862A4/en active Active
- 2017-04-06 WO PCT/CN2017/079565 patent/WO2018090545A1/zh active Application Filing
- 2017-11-15 TW TW106139384A patent/TWI658420B/zh active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103235823A (zh) * | 2013-05-06 | 2013-08-07 | 上海河广信息科技有限公司 | 根据相关网页和当前行为确定用户当前兴趣的方法和系统 |
US20150039620A1 (en) * | 2013-07-31 | 2015-02-05 | Google Inc. | Creating personalized and continuous playlists for a content sharing platform based on user history |
CN105975483A (zh) * | 2016-04-25 | 2016-09-28 | 北京三快在线科技有限公司 | 一种基于用户偏好的消息推送方法和平台 |
CN106530010A (zh) * | 2016-11-15 | 2017-03-22 | 平安科技(深圳)有限公司 | 融合时间因素的协同过滤方法和装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3543941A4 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111209977A (zh) * | 2020-01-16 | 2020-05-29 | 北京百度网讯科技有限公司 | 分类模型的训练和使用方法、装置、设备和介质 |
CN111209977B (zh) * | 2020-01-16 | 2024-01-05 | 北京百度网讯科技有限公司 | 分类模型的训练和使用方法、装置、设备和介质 |
CN111652741A (zh) * | 2020-04-30 | 2020-09-11 | 中国平安财产保险股份有限公司 | 用户偏好分析方法、装置及可读存储介质 |
CN111652741B (zh) * | 2020-04-30 | 2023-06-09 | 中国平安财产保险股份有限公司 | 用户偏好分析方法、装置及可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US10565525B2 (en) | 2020-02-18 |
JP6484730B2 (ja) | 2019-03-13 |
CN106530010B (zh) | 2017-12-12 |
TWI658420B (zh) | 2019-05-01 |
JP2019507398A (ja) | 2019-03-14 |
AU2017101862A4 (en) | 2019-10-31 |
SG11201709930TA (en) | 2018-06-28 |
KR20190084866A (ko) | 2019-07-17 |
US20180300648A1 (en) | 2018-10-18 |
CN106530010A (zh) | 2017-03-22 |
EP3543941A4 (en) | 2020-07-29 |
KR102251302B1 (ko) | 2021-05-13 |
EP3543941A1 (en) | 2019-09-25 |
AU2017268629A1 (en) | 2018-05-31 |
TW201820231A (zh) | 2018-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018090545A1 (zh) | 融合时间因素的协同过滤方法、装置、服务器和存储介质 | |
US10846643B2 (en) | Method and system for predicting task completion of a time period based on task completion rates and data trend of prior time periods in view of attributes of tasks using machine learning models | |
TWI702844B (zh) | 用戶特徵的生成方法、裝置、設備及儲存介質 | |
CN108829808B (zh) | 一种页面个性化排序方法、装置及电子设备 | |
CN106503022B (zh) | 推送推荐信息的方法和装置 | |
US11109083B2 (en) | Utilizing a deep generative model with task embedding for personalized targeting of digital content through multiple channels across client devices | |
WO2019076173A1 (zh) | 内容推送方法、装置及计算机设备 | |
US20210056458A1 (en) | Predicting a persona class based on overlap-agnostic machine learning models for distributing persona-based digital content | |
CN110321422A (zh) | 在线训练模型的方法、推送方法、装置以及设备 | |
US11429653B2 (en) | Generating estimated trait-intersection counts utilizing semantic-trait embeddings and machine learning | |
JP2017535857A (ja) | 変換されたデータを用いた学習 | |
CN111783810B (zh) | 用于确定用户的属性信息的方法和装置 | |
US10318540B1 (en) | Providing an explanation of a missing fact estimate | |
WO2019061664A1 (zh) | 电子装置、基于用户上网数据的产品推荐方法及存储介质 | |
US20190273789A1 (en) | Establishing and utilizing behavioral data thresholds for deep learning and other models to identify users across digital space | |
US20210241072A1 (en) | Systems and methods of business categorization and service recommendation | |
Inoue et al. | Estimating customer impatience in a service system with unobserved balking | |
Song et al. | A novel QoS-aware prediction approach for dynamic web services | |
US20140324524A1 (en) | Evolving a capped customer linkage model using genetic models | |
Mahendran et al. | A model robust subsampling approach for Generalised Linear Models in big data settings | |
CN110704648B (zh) | 用户行为属性的确定方法、装置、服务器及存储介质 | |
US20230126932A1 (en) | Recommended audience size | |
US20230316325A1 (en) | Generation and implementation of a configurable measurement platform using artificial intelligence (ai) and machine learning (ml) based techniques | |
CN117319475A (zh) | 通信资源推荐方法、装置、计算机设备和存储介质 | |
CN118227677A (zh) | 信息推荐及信息推荐模型处理方法、装置、设备和介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 11201709930T; country of ref document: SG. Ref document number: 15578368; country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 2017566628; country of ref document: JP |
| ENP | Entry into the national phase | Ref document number: 20187015328; country of ref document: KR; kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 2017268629; country of ref document: AU; date of ref document: 20170406; kind code of ref document: A |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17801315; country of ref document: EP; kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |