
CN107944035B - Image recommendation method integrating visual features and user scores - Google Patents

Image recommendation method integrating visual features and user scores Download PDF

Info

Publication number
CN107944035B
CN107944035B (application CN201711330059.7A)
Authority
CN
China
Prior art keywords
user
image
article
representing
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711330059.7A
Other languages
Chinese (zh)
Other versions
CN107944035A (en)
Inventor
薛峰
孙健
陈思洋
路强
余烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Weimubingzhi Technology Co Ltd
Original Assignee
Hefei Weimubingzhi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Weimubingzhi Technology Co Ltd filed Critical Hefei Weimubingzhi Technology Co Ltd
Priority to CN201711330059.7A priority Critical patent/CN107944035B/en
Publication of CN107944035A publication Critical patent/CN107944035A/en
Application granted granted Critical
Publication of CN107944035B publication Critical patent/CN107944035B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/95 — Retrieval from the web
    • G06F16/953 — Querying, e.g. by the use of web search engines
    • G06F16/9535 — Search customisation based on user profiles and personalisation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 — Commerce
    • G06Q30/06 — Buying, selling or leasing transactions
    • G06Q30/0601 — Electronic shopping [e-shopping]
    • G06Q30/0631 — Item recommendations
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an image recommendation method integrating visual features and user scores, which comprises the following steps: 1. crawling a data set and extracting article images and a matrix of user scores for the corresponding article images from the data set; 2. extracting visual features of all collected article images by using a convolutional neural network (CNN) to obtain a visual feature matrix; 3. establishing a prediction preference model and updating it by using an element-based alternating least squares method; 4. obtaining the preference values of the user for all the article images from the final prediction preference model, sorting the preference values in descending order, and selecting the article images corresponding to the top-t preference values to recommend to the user. By fusing visual features with user scores, the invention improves recommendation precision and realizes personalized recommendation.

Description

Image recommendation method integrating visual features and user scores
Technical Field
The invention belongs to the technical field of image processing based on a computer vision technology, and mainly relates to an image recommendation method based on matrix decomposition.
Background
In recent years, with the rapid development of electronic commerce, a great amount of network image data has been generated. Faced with such a mass of image data, a user wants to be able to quickly locate the image information of interest. Search is the basic function for this purpose, but search is a service request initiated actively by the user. In order to enable the system to actively provide services for customers, image recommendation systems have been proposed: by analyzing the historical data of the user's interests and the image data in an image database, the image content the user is most likely to be interested in is recommended to the user; that is, the images closest to those the user was historically interested in are recommended.
Most commercial product search systems currently used on large e-commerce websites, such as Taobao and Amazon, use keyword-based search. A keyword-based image retrieval system requires that each commodity image be annotated with text describing the name, category, and other attributes of the commodity; the search keywords input by the user are then matched against these text descriptions. However, text information can hardly describe all the characteristics of a commodity completely, and it is strongly influenced by subjective factors: the commodity descriptions entered by users are difficult to make objective and accurate, different commodity requirements may be expressed with the same keywords, and the same requirement may be expressed with different keywords, so the returned image sets differ greatly and the efficiency with which users find images of interest drops sharply. In keyword-based search, a large amount of time and labor is also consumed in curating the text annotations of commodities, and the chosen search keywords, likewise subject to the user's subjective factors, strongly influence the results. How to reduce the influence of these factors is attracting more and more attention; retrieval based on image content, which reduces the dependence on text information, can effectively solve the problems raised above.
Traditional content-based image retrieval extracts visual features of an image such as color, texture, or shape. Such retrieval is affected by the environment and the capture device at shooting time, which can seriously degrade the search results, and reducing this influence remains difficult. Moreover, traditional image recommendation focuses only on the attributes of the articles and cannot take the personal preferences and interests of the user into account, so accurate personalized recommendation cannot be realized.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an image recommendation method integrating visual features and user scores so as to improve recommendation precision and realize personalized recommendation.
The invention adopts the following technical scheme for solving the technical problems:
the invention relates to an image recommendation method integrating visual features and user scores, which is characterized by comprising the following steps of:
step 1, crawling an article image set P and a corresponding article scoring data set Q from a website through a web crawler;
step 2, extracting N article images from the article image set P, and extracting the evaluation information of M users on the N article images from the corresponding article scoring data set Q, thereby obtaining a scoring matrix $Y \in \mathbb{R}^{M \times N}$ of the M users on the N article images; the score of any user u on any article image i in the scoring matrix Y is recorded as $y_{ui}$: $y_{ui}=1$ denotes that user u evaluated the article corresponding to article image i, and $y_{ui}=0$ denotes that user u did not evaluate it;
step 3, carrying out normalization processing on the N article images to obtain an image set C;
step 4, respectively extracting the features of the N images in the image set C by using a convolutional neural network CNN, so as to obtain the visual feature matrix $F \in \mathbb{R}^{T \times N}$ of the N images, where T represents the dimension of the visual feature of each image and each column vector $f_i \in \mathbb{R}^{T}$ represents the visual feature vector corresponding to image i;
step 5, establishing a prediction preference model by using formula (1):

$$\hat{y}_{ui} = p_u^{\mathrm{T}} E f_i \tag{1}$$

in formula (1), $p_u \in \mathbb{R}^{K}$ represents the potential feature vector of user u, K represents the potential feature dimension, and $E \in \mathbb{R}^{K \times T}$ represents the transformation matrix converting the visual feature vector $f_i$ of image i into an embedded vector; $E f_i$ represents the potential feature vector of image i, and $\hat{y}_{ui}$ represents the predicted preference of user u for image i;
step 6, updating the prediction preference model by using an element-based alternating least square method;
step 6.1, obtaining the loss function L by using formula (2):

$$L = \sum_{(u,i)\in\mathcal{Y}} w_{ui}\,(y_{ui} - \hat{y}_{ui})^2 + \sum_{u=1}^{M}\sum_{i\notin\mathcal{Y}_u} c_i\,\hat{y}_{ui}^2 + \lambda\Big(\sum_{u=1}^{M}\lVert p_u\rVert^2 + \sum_{k=1}^{K}\lVert e_k\rVert^2\Big) \tag{2}$$

in formula (2), $\mathcal{Y}$ represents the set of evaluated (user, article image) pairs in the scoring matrix Y, and $w_{ui}$ represents the weight of the score of any user u on any article image i in the scoring matrix Y; $\hat{y}_{ui}^{k} = \hat{y}_{ui} - p_{uk}\,(e_k f_i)$ represents the preference of any user u for any article image i after the contribution of the k-th row vector of the transformation matrix E is removed, where $p_{uk}$ represents the k-th dimension value of the potential feature vector $p_u$ of user u; $c_i$ represents the weight of an article image i that was not evaluated in the scoring matrix Y; $\lambda$ represents the L2 regularization parameter; and $e_k$ represents the k-th row vector of the transformation matrix E;
step 6.2, defining a loop variable α and initializing α = 0; defining the maximum number of loops α_max; randomly initializing the parameters $\{p_u^{\alpha}, E^{\alpha}\}$ of the prediction preference model of the α-th loop by using the standard normal distribution, wherein $p_u^{\alpha}$ represents the potential feature vector of user u in the α-th loop and $E^{\alpha}$ represents the transformation matrix of the α-th loop;

step 6.3, updating the k-th dimension value $p_{uk}^{\alpha}$ of the potential feature vector $p_u$ of user u in the α-th loop by using formula (3):

$$p_{uk}^{\alpha} = \frac{\sum_{i\in\mathcal{Y}_u}\big(w_{ui}y_{ui} - (w_{ui}-c_i)\,\hat{y}_{ui}^{k}\big)(e_k^{\alpha} f_i) - \sum_{i=1}^{N} c_i\,\hat{y}_{ui}^{k}\,(e_k^{\alpha} f_i)}{\sum_{i\in\mathcal{Y}_u}(w_{ui}-c_i)(e_k^{\alpha} f_i)^2 + \sum_{i=1}^{N} c_i\,(e_k^{\alpha} f_i)^2 + \lambda} \tag{3}$$

in formula (3), $e_k^{\alpha}$ represents the k-th row vector of the transformation matrix E in the α-th loop, and $\mathcal{Y}_u$ represents the set of article images evaluated by user u in the scoring matrix Y;
step 6.4, adopting an element-by-element updating strategy, and updating the j-th dimension value $E_{kj}^{\alpha}$ of the k-th row vector of the transformation matrix $E^{\alpha}$ of the α-th loop by using formula (4):

$$E_{kj}^{\alpha} = \frac{\sum_{u=1}^{M}\Big[\sum_{i\in\mathcal{Y}_u}\big(w_{ui}y_{ui} - (w_{ui}-c_i)(\hat{y}_{ui}^{k} + p_{uk}\hat{e}_{i}^{kj})\big)p_{uk}f_{ij} - \sum_{i=1}^{N} c_i\,(\hat{y}_{ui}^{k} + p_{uk}\hat{e}_{i}^{kj})\,p_{uk}f_{ij}\Big]}{\sum_{u=1}^{M}\Big[\sum_{i\in\mathcal{Y}_u}(w_{ui}-c_i)\,p_{uk}^2 f_{ij}^2 + \sum_{i=1}^{N} c_i\,p_{uk}^2 f_{ij}^2\Big] + \lambda} \tag{4}$$

in formula (4), $f_{ij}$ represents the j-th dimension value of the visual feature of article image i in the visual feature matrix F; $\hat{e}_{i}^{kj} = e_k^{\alpha} f_i - E_{kj}^{\alpha} f_{ij}$ represents the k-th potential coordinate of article image i after the contribution of the j-th dimension value of its visual feature is eliminated; and $\hat{y}_{ui}^{k}$ represents the preference of any user u for any article image i after the k-th row vector of the transformation matrix E is removed in the α-th loop;
step 6.5, assigning α+1 to α, and judging whether α > α_max holds; if yes, the optimal prediction preference model parameters $\{p_u^{\alpha_{\max}}, E^{\alpha_{\max}}\}$ are obtained; otherwise, returning to step 6.3;
step 7, according to the optimal prediction preference model parameters $\{p_u^{\alpha_{\max}}, E^{\alpha_{\max}}\}$, predicting the preference set $\hat{Y}_u$ of user u for all the article images by using formula (5):

$$\hat{Y}_u = (p_u^{\alpha_{\max}})^{\mathrm{T}} E^{\alpha_{\max}} F \tag{5}$$

in formula (5), $E^{\alpha_{\max}}$ represents the transformation matrix of the α_max-th loop and $p_u^{\alpha_{\max}}$ represents the potential feature vector of user u in the α_max-th loop;
step 8, sorting the preference values in the preference set $\hat{Y}_u$ of user u for all the article images in descending order, and selecting the article images corresponding to the top-t preference values to recommend to user u.
Compared with the prior art, the invention has the beneficial effects that:
1. The image visual features are merged into the matrix decomposition formula: the convolutional neural network CNN extracts the image visual features, a prediction preference model is established by using matrix decomposition from collaborative filtering, and the decomposition is carried out with an element-based alternating least squares method, so that the recommendation precision of the image recommendation system is improved and personalized recommendation is realized.
2. The method uses the convolutional neural network CNN to extract the features of the images in the image set and recommends on the basis of image visual features, effectively avoiding the problems that traditional text-based recommendation methods can hardly describe all the characteristics of a commodity completely and are influenced by subjective user factors.
3. The method establishes the prediction preference model with matrix decomposition from collaborative filtering; since a collaborative filtering algorithm depends not on the content features of the recommended information but on the behavior of users, its application range is wider.
4. The method updates the prediction preference model with the element-based alternating least squares method, which has lower time complexity and better convergence than the traditional alternating least squares method.
Drawings
FIG. 1 is an overall flow chart of the present invention.
Detailed Description
In this embodiment, an image recommendation method fusing visual features and user scores includes: crawling a data set; extracting article images and the scoring matrix of users for the corresponding article images from the data set; extracting visual features of the collected article images with a convolutional neural network to obtain a visual feature matrix; establishing a prediction preference model and updating it with an element-based alternating least squares method; and obtaining the preference values of the user for all article images from the final prediction preference model to complete image recommendation. The whole process is shown in Figure 1; concretely, the method comprises the following steps.
Step 1, crawling an article image set P and a corresponding article scoring data set Q from a website through a web crawler;
step 1.1, initializing a URL list;
step 1.2, calling the API to obtain a large amount of commodity information stored in XML format;
step 1.3, parsing the obtained XML files to obtain a seed list, and storing the parsed results in a database;
step 1.4, after the seed list of commodity names is obtained, performing screening and de-duplication operations on the list;
step 1.5, if the URL list needs to be expanded, continuing to execute step 1.2; otherwise, obtaining the article image set P and the corresponding article scoring data set Q.
Step 2, extracting N article images from the article image set P, and extracting evaluation information of M users on the N article images from the corresponding article scoring data set Q, thereby obtaining a scoring matrix of the M users on the N article images
Figure GDA0002554978340000041
And the score of any user u on any item image i in the scoring matrix Y is recorded as YuiIf y isui1 denotes that the user u evaluated the item corresponding to the item image i, and yui0 indicates that the user u does not evaluate the item corresponding to the item image i;
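As a minimal illustration of the scoring matrix of step 2, the binary matrix Y can be built from (user, article-image) evaluation pairs; the pairs and sizes below are hypothetical toy values, not taken from the patent's data set:

```python
import numpy as np

M, N = 4, 5  # M users, N article images (toy sizes)
# hypothetical pairs (u, i) where user u evaluated article image i
evaluated = [(0, 1), (0, 3), (1, 0), (2, 2), (3, 4)]

Y = np.zeros((M, N))      # scoring matrix Y in R^{M x N}
for u, i in evaluated:
    Y[u, i] = 1.0         # y_ui = 1: user u evaluated article image i
# all other entries stay 0: y_ui = 0 means no evaluation
```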
step 3, carrying out normalization processing on the N article images to obtain an image set C;
step 4, respectively extracting the features of the N images in the image set C by using a convolutional neural network CNN, so as to obtain the visual feature matrix $F \in \mathbb{R}^{T \times N}$ of the N images, where T represents the dimension of the visual feature of each image and each column vector $f_i \in \mathbb{R}^{T}$ represents the visual feature vector corresponding to image i;
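To fix the shapes used below, the visual feature matrix F of step 4 stacks one T-dimensional feature vector per image as a column. The random vectors here are only a stand-in for the CNN activations the patent extracts; the layout, not the feature values, is the point of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 8   # N article images, T-dimensional visual features (toy sizes)

# stand-in for CNN features: in the method, features[i] would be the
# CNN activation vector f_i of article image i
features = [rng.standard_normal(T) for _ in range(N)]

F = np.column_stack(features)   # F in R^{T x N}; column i is f_i
```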
step 5, establishing a prediction preference model by using formula (1):

$$\hat{y}_{ui} = p_u^{\mathrm{T}} E f_i \tag{1}$$

in formula (1), $p_u \in \mathbb{R}^{K}$ represents the potential feature vector of user u, K represents the potential feature dimension, and $E \in \mathbb{R}^{K \times T}$ represents the transformation matrix converting the visual feature vector $f_i$ of image i into an embedded vector; $E f_i$ represents the potential feature vector of image i, and $\hat{y}_{ui}$ represents the predicted preference of user u for image i;
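Formula (1) reduces to a single inner product once the image feature has been mapped into the latent space. A NumPy sketch with toy, randomly initialized parameters (all sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, T, K = 4, 5, 8, 3          # users, images, feature dim, latent dim

P = rng.standard_normal((M, K))  # row u is p_u, the latent vector of user u
E = rng.standard_normal((K, T))  # transformation matrix E
F = rng.standard_normal((T, N))  # column i is the visual feature f_i

u, i = 2, 3
y_hat_ui = P[u] @ (E @ F[:, i])  # formula (1): y^_ui = p_u^T E f_i

# all M x N predictions at once: Y_hat = P E F
Y_hat = P @ E @ F
```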
step 6, updating the prediction preference model by using an element-based alternating least square method;
step 6.1, obtaining the loss function L by using formula (2):

$$L = \sum_{(u,i)\in\mathcal{Y}} w_{ui}\,(y_{ui} - \hat{y}_{ui})^2 + \sum_{u=1}^{M}\sum_{i\notin\mathcal{Y}_u} c_i\,\hat{y}_{ui}^2 + \lambda\Big(\sum_{u=1}^{M}\lVert p_u\rVert^2 + \sum_{k=1}^{K}\lVert e_k\rVert^2\Big) \tag{2}$$

in formula (2), $\mathcal{Y}$ represents the set of evaluated (user, article image) pairs in the scoring matrix Y, and $w_{ui}$ represents the weight of the score of any user u on any article image i in the scoring matrix Y; $\hat{y}_{ui}^{k} = \hat{y}_{ui} - p_{uk}\,(e_k f_i)$ represents the preference of any user u for any article image i after the contribution of the k-th row vector of the transformation matrix E is removed, where $p_{uk}$ represents the k-th dimension value of the potential feature vector $p_u$ of user u; $c_i$ represents the weight of an article image i that was not evaluated in the scoring matrix Y; $\lambda$ represents the L2 regularization parameter; and $e_k$ represents the k-th row vector of the transformation matrix E;
wherein formula (2) is obtained by substituting the prediction $\hat{y}_{ui} = p_u^{\mathrm{T}} E f_i$ of formula (1) into the weighted squared loss. The first term $\sum_{(u,i)\in\mathcal{Y}} w_{ui}(y_{ui}-\hat{y}_{ui})^2$ of the loss function L represents the loss over the set of evaluated article images, the second term $\sum_{u}\sum_{i\notin\mathcal{Y}_u} c_i\,\hat{y}_{ui}^2$ represents the loss over the set of article images that have not been evaluated, and the third term $\lambda\big(\sum_u\lVert p_u\rVert^2 + \sum_k\lVert e_k\rVert^2\big)$ represents the L2 regularization term of the prediction preference model;
step 6.2, defining a loop variable α and initializing α = 0; defining the maximum number of loops α_max; randomly initializing the parameters $\{p_u^{\alpha}, E^{\alpha}\}$ of the prediction preference model of the α-th loop by using the standard normal distribution, wherein $p_u^{\alpha}$ represents the potential feature vector of user u in the α-th loop and $E^{\alpha}$ represents the transformation matrix of the α-th loop;

step 6.3, updating the k-th dimension value $p_{uk}^{\alpha}$ of the potential feature vector $p_u$ of user u in the α-th loop by using formula (3):

$$p_{uk}^{\alpha} = \frac{\sum_{i\in\mathcal{Y}_u}\big(w_{ui}y_{ui} - (w_{ui}-c_i)\,\hat{y}_{ui}^{k}\big)(e_k^{\alpha} f_i) - \sum_{i=1}^{N} c_i\,\hat{y}_{ui}^{k}\,(e_k^{\alpha} f_i)}{\sum_{i\in\mathcal{Y}_u}(w_{ui}-c_i)(e_k^{\alpha} f_i)^2 + \sum_{i=1}^{N} c_i\,(e_k^{\alpha} f_i)^2 + \lambda} \tag{3}$$

in formula (3), $e_k^{\alpha}$ represents the k-th row vector of the transformation matrix E in the α-th loop, and $\mathcal{Y}_u$ represents the set of article images evaluated by user u in the scoring matrix Y;

wherein formula (3) is obtained by taking the derivative of formula (2) with respect to $p_{uk}$ and setting $\partial L/\partial p_{uk} = 0$.
Step 6.4, adopting an element-by-element updating strategy, and updating the α th cycle transformation matrix E by using the formula (4)αJ-dimension value of k-th row vector
Figure GDA0002554978340000067
Figure GDA0002554978340000068
In the formula (4), fijThe j-th dimension of the feature value of the item image i in the visual feature matrix F,
Figure GDA0002554978340000069
denotes the transformation matrix E at the α th cycleαIn the k-th row vector, eliminating the potential characteristic value of the j-th dimension value of the article image i in the visual characteristic matrix F;
Figure GDA00025549783400000610
after the k-th row vector in the transformation matrix E is removed in the α th cycle, the preference of any user u on any article image i;
the derivation process of formula (4) is as follows:
the following formula is defined:
Figure GDA00025549783400000611
in the above formula, EktRepresenting the t-dimensional value of the k-th row of the E matrix, EkjDenotes the j-th dimension value, f, of the k-th row of the E matrixitRepresenting the t-dimensional value, F, of the ith row of the F matrixijRepresents the j-th dimension value of the i-th row of the F matrix. According to the above definition, the formula for rewriting L is:
Figure GDA00025549783400000612
Figure GDA0002554978340000071
to EkjTaking the derivative, we can get:
Figure GDA0002554978340000072
order to
Figure GDA0002554978340000073
Thus, formula (4) can be obtained.
Step 6.5, assigning α +1 to α, and judging α > αmaxWhether the optimal prediction preference model parameter is obtained or not is judged, if yes, the optimal prediction preference model parameter is obtained
Figure GDA0002554978340000074
Otherwise, returning to the step 6.3 for execution;
step 7, according to the optimal prediction preference model parameters $\{p_u^{\alpha_{\max}}, E^{\alpha_{\max}}\}$, predicting the preference set $\hat{Y}_u$ of user u for all the article images by using formula (5):

$$\hat{Y}_u = (p_u^{\alpha_{\max}})^{\mathrm{T}} E^{\alpha_{\max}} F \tag{5}$$

in formula (5), $E^{\alpha_{\max}}$ represents the transformation matrix of the α_max-th loop and $p_u^{\alpha_{\max}}$ represents the potential feature vector of user u in the α_max-th loop;
step 8, sorting the preference values in the preference set $\hat{Y}_u$ of user u for all the article images in descending order, and selecting the article images corresponding to the top-t preference values to recommend to user u.
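The descending sort and top-t selection of step 8 reduce to an argsort over the predicted preferences; the preference values and t below are hypothetical:

```python
import numpy as np

# hypothetical predicted preferences y^_u for N = 5 article images
preferences = np.array([0.42, 1.7, -0.3, 0.9, 1.1])
t = 3

top_t = np.argsort(-preferences)[:t]   # indices in descending preference order
# top_t -> array([1, 4, 3]): article images 1, 4, 3 are recommended to user u
```

In practice the images the user has already evaluated would typically be filtered out of `top_t` before recommending.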

Claims (1)

1. An image recommendation method integrating visual features and user scores is characterized by comprising the following steps:
step 1, crawling an article image set P and a corresponding article scoring data set Q from a website through a web crawler;
step 2, extracting N article images from the article image set P, and extracting the evaluation information of M users on the N article images from the corresponding article scoring data set Q, thereby obtaining a scoring matrix $Y \in \mathbb{R}^{M \times N}$ of the M users on the N article images; the score of any user u on any article image i in the scoring matrix Y is recorded as $y_{ui}$: $y_{ui}=1$ denotes that user u evaluated the article corresponding to article image i, and $y_{ui}=0$ denotes that user u did not evaluate it;
step 3, carrying out normalization processing on the N article images to obtain an image set C;
step 4, respectively extracting the features of the N images in the image set C by using a convolutional neural network CNN, so as to obtain the visual feature matrix $F \in \mathbb{R}^{T \times N}$ of the N images, where T represents the dimension of the visual feature of each image and each column vector $f_i \in \mathbb{R}^{T}$ represents the visual feature vector corresponding to image i;
step 5, establishing a prediction preference model by using formula (1):

$$\hat{y}_{ui} = p_u^{\mathrm{T}} E f_i \tag{1}$$

in formula (1), $p_u \in \mathbb{R}^{K}$ represents the potential feature vector of user u, K represents the potential feature dimension, and $E \in \mathbb{R}^{K \times T}$ represents the transformation matrix converting the visual feature vector $f_i$ of image i into an embedded vector; $E f_i$ represents the potential feature vector of image i, and $\hat{y}_{ui}$ represents the predicted preference of user u for image i;
step 6, updating the prediction preference model by using an element-based alternating least square method;
step 6.1, obtaining the loss function L by using formula (2):

$$L = \sum_{(u,i)\in\mathcal{Y}} w_{ui}\,(y_{ui} - \hat{y}_{ui})^2 + \sum_{u=1}^{M}\sum_{i\notin\mathcal{Y}_u} c_i\,\hat{y}_{ui}^2 + \lambda\Big(\sum_{u=1}^{M}\lVert p_u\rVert^2 + \sum_{k=1}^{K}\lVert e_k\rVert^2\Big) \tag{2}$$

in formula (2), $\mathcal{Y}$ represents the set of evaluated (user, article image) pairs in the scoring matrix Y, and $w_{ui}$ represents the weight of the score of any user u on any article image i in the scoring matrix Y; $\hat{y}_{ui}^{k} = \hat{y}_{ui} - p_{uk}\,(e_k f_i)$ represents the preference of any user u for any article image i after the contribution of the k-th row vector of the transformation matrix E is removed, where $p_{uk}$ represents the k-th dimension value of the potential feature vector $p_u$ of user u; $c_i$ represents the weight of an article image i that was not evaluated in the scoring matrix Y; $\lambda$ represents the L2 regularization parameter; and $e_k$ represents the k-th row vector of the transformation matrix E;
step 6.2, defining a loop variable α and initializing α = 0; defining the maximum number of loops α_max; randomly initializing the parameters $\{p_u^{\alpha}, E^{\alpha}\}$ of the prediction preference model of the α-th loop by using the standard normal distribution, wherein $p_u^{\alpha}$ represents the potential feature vector of user u in the α-th loop and $E^{\alpha}$ represents the transformation matrix of the α-th loop;

step 6.3, updating the k-th dimension value $p_{uk}^{\alpha}$ of the potential feature vector $p_u$ of user u in the α-th loop by using formula (3):

$$p_{uk}^{\alpha} = \frac{\sum_{i\in\mathcal{Y}_u}\big(w_{ui}y_{ui} - (w_{ui}-c_i)\,\hat{y}_{ui}^{k}\big)(e_k^{\alpha} f_i) - \sum_{i=1}^{N} c_i\,\hat{y}_{ui}^{k}\,(e_k^{\alpha} f_i)}{\sum_{i\in\mathcal{Y}_u}(w_{ui}-c_i)(e_k^{\alpha} f_i)^2 + \sum_{i=1}^{N} c_i\,(e_k^{\alpha} f_i)^2 + \lambda} \tag{3}$$

in formula (3), $e_k^{\alpha}$ represents the k-th row vector of the transformation matrix E in the α-th loop, and $\mathcal{Y}_u$ represents the set of article images evaluated by user u in the scoring matrix Y;
step 6.4, adopting an element-by-element updating strategy, and updating the j-th dimension value $E_{kj}^{\alpha}$ of the k-th row vector of the transformation matrix $E^{\alpha}$ of the α-th loop by using formula (4):

$$E_{kj}^{\alpha} = \frac{\sum_{u=1}^{M}\Big[\sum_{i\in\mathcal{Y}_u}\big(w_{ui}y_{ui} - (w_{ui}-c_i)(\hat{y}_{ui}^{k} + p_{uk}\hat{e}_{i}^{kj})\big)p_{uk}f_{ij} - \sum_{i=1}^{N} c_i\,(\hat{y}_{ui}^{k} + p_{uk}\hat{e}_{i}^{kj})\,p_{uk}f_{ij}\Big]}{\sum_{u=1}^{M}\Big[\sum_{i\in\mathcal{Y}_u}(w_{ui}-c_i)\,p_{uk}^2 f_{ij}^2 + \sum_{i=1}^{N} c_i\,p_{uk}^2 f_{ij}^2\Big] + \lambda} \tag{4}$$

in formula (4), $f_{ij}$ represents the j-th dimension value of the visual feature of article image i in the visual feature matrix F; $\hat{e}_{i}^{kj} = e_k^{\alpha} f_i - E_{kj}^{\alpha} f_{ij}$ represents the k-th potential coordinate of article image i after the contribution of the j-th dimension value of its visual feature is eliminated; and $\hat{y}_{ui}^{k}$ represents the preference of any user u for any article image i after the k-th row vector of the transformation matrix E is removed in the α-th loop;
step 6.5, assigning α+1 to α, and judging whether α > α_max holds; if yes, the optimal prediction preference model parameters $\{p_u^{\alpha_{\max}}, E^{\alpha_{\max}}\}$ are obtained; otherwise, returning to step 6.3;
step 7, according to the optimal prediction preference model parameters $\{p_u^{\alpha_{\max}}, E^{\alpha_{\max}}\}$, predicting the preference set $\hat{Y}_u$ of user u for all the article images by using formula (5):

$$\hat{Y}_u = (p_u^{\alpha_{\max}})^{\mathrm{T}} E^{\alpha_{\max}} F \tag{5}$$

in formula (5), $E^{\alpha_{\max}}$ represents the transformation matrix of the α_max-th loop and $p_u^{\alpha_{\max}}$ represents the potential feature vector of user u in the α_max-th loop;
step 8, sorting the preference values in the preference set $\hat{Y}_u$ of user u for all the article images in descending order, and selecting the article images corresponding to the top-t preference values to recommend to user u.
CN201711330059.7A 2017-12-13 2017-12-13 Image recommendation method integrating visual features and user scores Active CN107944035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711330059.7A CN107944035B (en) 2017-12-13 2017-12-13 Image recommendation method integrating visual features and user scores

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711330059.7A CN107944035B (en) 2017-12-13 2017-12-13 Image recommendation method integrating visual features and user scores

Publications (2)

Publication Number Publication Date
CN107944035A CN107944035A (en) 2018-04-20
CN107944035B true CN107944035B (en) 2020-10-13

Family

ID=61943060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711330059.7A Active CN107944035B (en) 2017-12-13 2017-12-13 Image recommendation method integrating visual features and user scores

Country Status (1)

Country Link
CN (1) CN107944035B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108874914B (en) * 2018-05-29 2021-11-02 吉林大学 Information recommendation method based on graph convolution and neural collaborative filtering
CN110555161A (en) * 2018-05-30 2019-12-10 河南理工大学 personalized recommendation method based on user trust and convolutional neural network
CN108959429B (en) * 2018-06-11 2022-09-09 苏州大学 Method and system for recommending movie integrating visual features for end-to-end training
CN108985899B (en) * 2018-07-13 2022-04-22 合肥工业大学 Recommendation method, system and storage medium based on CNN-LFM model
CN109102127B (en) * 2018-08-31 2021-10-26 杭州贝购科技有限公司 Commodity recommendation method and device
CN109522950B (en) * 2018-11-09 2022-04-22 网易传媒科技(北京)有限公司 Image scoring model training method and device and image scoring method and device
CN109978836B (en) * 2019-03-06 2021-01-19 华南理工大学 User personalized image aesthetic feeling evaluation method, system, medium and equipment based on meta learning
CN111800569B (en) * 2019-04-09 2022-02-22 Oppo广东移动通信有限公司 Photographing processing method and device, storage medium and electronic equipment
CN110059262B (en) * 2019-04-19 2021-07-02 武汉大学 Project recommendation model construction method and device based on hybrid neural network and project recommendation method
CN112862538A (en) * 2021-03-02 2021-05-28 中国工商银行股份有限公司 Method, apparatus, electronic device, and medium for predicting user preference
CN113362131B (en) * 2021-06-02 2022-09-13 合肥工业大学 Intelligent commodity recommendation method based on map model and integrating knowledge map and user interaction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722842A (en) * 2012-06-11 2012-10-10 Yao Mingdong Commodity recommendation optimization method based on customer behavior
US9449221B2 (en) * 2014-03-25 2016-09-20 Wipro Limited System and method for determining the characteristics of human personality and providing real-time recommendations
CN104298787A (en) * 2014-11-13 2015-01-21 Wu Jian Personalized recommendation method and device based on a fusion strategy
CN105740444A (en) * 2016-02-02 2016-07-06 Guilin University of Electronic Technology Item recommendation method based on user scores
CN107341204B (en) * 2017-06-22 2023-04-07 University of Electronic Science and Technology of China Collaborative filtering recommendation method and system fusing item label information

Also Published As

Publication number Publication date
CN107944035A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107944035B (en) Image recommendation method integrating visual features and user scores
CN105426528B (en) Retrieval and ranking method and system for commodity data
CN111797321B (en) Personalized knowledge recommendation method and system for different scenes
CN108665323B (en) Integration method for financial product recommendation system
CN103544216B (en) Information recommendation method and system combining image content and keywords
CN104424296B (en) Query term ranking method and device
CN110020128B (en) Search result ordering method and device
US20170124618A1 (en) Methods and Systems for Image-Based Searching of Product Inventory
CN110175895B (en) Article recommendation method and device
CN107683469A (en) Product classification method and device based on deep learning
CN110674407A (en) Hybrid recommendation method based on graph convolution neural network
CN104268142B (en) Metasearch engine result ranking method based on a rejection strategy
CN104881798A (en) Device and method for personalized search based on commodity image features
CN105975596A (en) Query expansion method and system of search engine
CN113850649A (en) Customized recommendation method and recommendation system based on multi-platform user data
WO2019011936A1 (en) Method for evaluating an image
CN107622071B (en) Clothing image retrieval system and method via indirect relevance feedback under non-source-retrieval conditions
CN115248876B (en) Remote sensing image overall recommendation method based on content understanding
CN116541607B (en) Intelligent recommendation method based on commodity retrieval data analysis
CN111858972A (en) Movie recommendation method based on family knowledge graph
CN115712780A (en) Information pushing method and device based on cloud computing and big data
CN107169830B (en) Personalized recommendation method based on clustering PU matrix decomposition
CN108389113B (en) Collaborative filtering recommendation method and system
CN112989215B (en) Sparse user behavior data-based knowledge graph enhanced recommendation system
CN106407281B (en) Image retrieval method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200908

Address after: Room 405, Building 2, Binjiang Huayue Baihe Garden, Baohe District, Hefei City, Anhui Province, 230051

Applicant after: Hefei Weimubingzhi Technology Co., Ltd.

Address before: No. 193 Tunxi Road, Baohe District, Hefei City, Anhui Province, 230009

Applicant before: Hefei University of Technology

GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: Room 202, East Building, Panchao Internet Culture Industrial Park, No. 1 Jinchao Avenue, Chaohu Economic Development Zone, Hefei City, Anhui Province, 238014

Patentee after: Hefei Weimubingzhi Technology Co., Ltd.

Address before: Room 405, Building 2, Binjiang Huayue Baihe Garden, Baohe District, Hefei City, Anhui Province, 230051

Patentee before: Hefei Weimubingzhi Technology Co., Ltd.