CN108509534B - Personalized music recommendation system based on deep learning and implementation method thereof - Google Patents
- Publication number
- CN108509534B (application CN201810213304.4A)
- Authority
- CN
- China
- Prior art keywords
- song
- user
- songs
- listening
- group
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses a personalized music recommendation system based on deep learning and an implementation method thereof. The system comprises a user song listening recording system, a song recall model and a deep neural network (DNN) filtering model. The user song listening recording system allocates a globally unique account to each user to record user information and user operation records; the song recall model screens song groups that the user may be interested in from a massive song library according to the user's song listening records; training and testing of the DNN filtering model rely on the user song listening recording system, and the trained deep neural network filters and sorts the song groups recalled by the song recall model to finally generate the song combination recommended to the user. The system and the method can filter out songs the user is not interested in, reduce the amount of calculation and the computing resources required, and realize real-time, accurate recommendation.
Description
Technical Field
The invention relates to the technical field of machine learning and deep learning, in particular to a personalized music recommendation system based on deep learning and an implementation method thereof.
Background
In recent years, recommendation systems have developed rapidly. Amid today's explosive growth of information, a recommendation system can accurately connect users with items drawn from that surplus of information. The personalized music recommendation system based on deep learning uses a deep neural network to learn from massive user song listening data, reduces manual intervention in feature engineering, automatically learns user features, song features and scene features, and combines them linearly and nonlinearly to obtain a deep learning model. The recalled song list and the corresponding feature information are then input into the model to predict the probability that each song will be liked or cut, and after filtering and sorting by these probabilities, a batch of personalized songs is finally recommended to the user.
Traditional personalized music recommendation systems require the manual design and extraction of user features, song features and scene features, combine these features with simple hand-crafted linear rules, and do not consider the nonlinear relations among features. By introducing a deep neural network model, the present invention discovers the deep relations among features, automatically learns user features, song features and scene features with nonlinear factors, saves labor cost, and improves the intelligence and accuracy of the recommendation system.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings in the prior art and provides a personalized music recommendation system based on deep learning and an implementation method thereof.
In order to realize the purpose, the invention adopts the following technical scheme:
the personalized music recommendation system based on deep learning comprises a user song listening recording system, a song recall model and a deep neural network filtering model;
the user song listening recording system allocates a globally unique account to each user, records user information and user operation records, and obtains user portrait characteristics and the user's song listening stream;
the song recall model constructs a recall song group according to a user song listening recording system and a massive song library, wherein the recall song group comprises a user portrait song group and a user song listening running water similar song group;
according to a user song listening recording system, the deep neural network filtering model extracts all user song listening stream records from a user song listening stream, and divides the records into a training set and a test set, wherein the training set is used for training the deep neural network filtering model, and the test set is used for testing the accuracy of the model; and the trained deep neural network filtering model is used for filtering the recall song group recommended by the song recall model, and finally generating a song combination recommended to the user after filtering and sorting by the deep neural network filtering model.
Preferably, the user profile information, i.e., the user profile characteristics, includes: the age of the user, the gender of the user, the occupation of the user, the region where the user is located, the singer preferred by the user, the singer not preferred by the user, the genre of songs not preferred by the user, the language of songs preferred by the user, and the language of songs not preferred by the user;
the user operation record, namely the song listening stream of the user, comprises: songs liked by the user, songs shared by the user, songs downloaded by the user, songs completely listened to by the user, songs commented by the user, songs set as background music by the user, songs lost by the user in a trash can, songs cut by the user, songs recently listened by the user, songs derived from a source song to be liked, songs derived from a source song to be completely listened to by the source song, songs derived from a source song to be cut, songs derived from a source song to be lost in a trash can, and songs derived from a source song to be downloaded; the source song is a song on which similar songs are calculated.
As a preferred technical scheme, for a user portrait song group, the song recall model is used for counting the song group of each user portrait characteristic from a massive song library by utilizing a statistical principle based on the user portrait characteristics, and screening the user portrait song group from the song library according to the existing user portrait characteristics of the user;
for the song group similar to the user's song listening stream, the song recall model maintains, according to the different user operation records, a mapping from the user's song listening stream to a candidate song group; the candidate song group contains no more than ten thousand songs selected from the ten-million-level song library as the songs to recommend to the user; similar songs are then calculated from the song lists in the target user's different stream operation lists, and a similar-song combination is maintained to construct the song group similar to the user's song listening stream.
As a preferred technical scheme, the deep neural network filtering model is built with the TensorFlow framework; the recalled song group recommended by the song recall model, the song characteristics and the user portrait characteristics are input into the deep neural network filtering model for filtering, the model predicts the probability that each song will be cut or liked, the songs are sorted by this probability, some songs are filtered out and the rest are sorted again, and a personalized song group is recommended to the user.
The implementation method of the personalized music recommendation system based on deep learning comprises the following steps:
s1, acquiring portrait characteristics of the user and song listening stream of the user according to the user information and user operation records recorded by the song listening recording system of the user; the user song listening recording system sets the user portrait characteristics which cannot be acquired to be null, and records the real values of the user portrait characteristics which can be acquired;
s2, selecting the user portrait song group: counting the song combinations corresponding to the existing user portrait characteristics from the users' song listening streams, and maintaining a favorite song list for each dimension of each user portrait characteristic; for a new user, the counted feature dimensions are mapped to a set of favorite song combinations based on the user portrait characteristics that the user already has;
s3, selecting a group of songs similar to the stream of the user' S listening songs:
s31, selecting five groups of songs based on the song listening stream of the user: songs liked by the user (count n1), songs completely listened to (count n2), downloaded songs (count n3), shared songs (count n4) and songs set as background music (count n5); the total number of songs recorded in the user's song listening stream data is n = n1 + n2 + n3 + n4 + n5;
S32, setting weight coefficients for each group of song groups to reflect the song listening preference of the user, and constructing song listening running water song groups; after obtaining the song listening running water song groups, calculating the song listening running water similar song groups according to a filtering mode, namely maintaining a mapping F from the song listening running water song groups to the song listening running water similar song groups, wherein the F is a rule function for calculating the similarity of songs, and the mapping F calculates the similar songs of the song listening running water song groups and arranges the similar songs from high to low according to the similarity;
s4, constructing a deep neural network filtering model, and training and testing; extracting all the running records of the singing of the user from the running water of the singing of the user, dividing the running records into a training set and a test set, wherein the training set is used for training a deep neural network filtering model, and the test set is used for testing the accuracy of the model;
s5, preprocessing the recalled song group, and inputting the preprocessed recalled song group into the trained deep neural network filtering model; the recall song group comprises a user portrait song group and a user song listening running water similar song group;
s51, preprocessing: preliminarily filtering the recalled song group, and filtering out songs which are possibly illegal and unnecessary to recommend;
s52, inputting the preprocessed songs into a trained deep neural network filtering model, wherein the deep neural network filtering model predicts and outputs the probability that each song is liked or cut, and songs with liked probability lower than a custom threshold or with cut probability higher than the custom threshold are filtered; the song input into the deep neural network model has four kinds of characteristic information and combination characteristics thereof, wherein the four kinds of characteristic information are scene characteristic information, song characteristic information, source song characteristic information and user characteristic information; the combination characteristics are counted out through the song listening stream of the user;
and S53, after the probability of being liked or cut is predicted by the recalled song group through the deep neural network filtering model, a batch of songs are filtered according to a custom threshold value, and then the filtered songs are sorted.
Compared with the prior art, the invention has the following advantages and effects:
(1) the system of the invention designs a precise user song listening recording system, records user information and song listening pipelining operation, and provides a data support source for subsequent steps;
(2) according to the system, the accurate song recall model is designed, and song groups which are possibly interested by the user can be recalled quickly and accurately from a massive song library in ten-million levels on the basis of the data of the song listening recording system of the user, so that the calculated amount of a deep neural network model is reduced, the operation time is shortened, and quick real-time recommendation is realized;
(3) the system and the method input the recall song group recommended by the song recall model into a trained Deep Neural Network (DNN) filtering model, and finally pick out the song most suitable for being recommended to the user from the candidate song group. Selecting the running water data of a plurality of users from the running water of the songs listened by the users to train the Deep Neural Network (DNN) filtering model, wherein the running water data of the songs listened by the users comprises the characteristic information of the users and the characteristic information of the corresponding songs, the deep neural network automatically learns the characteristic information of the users and the songs and combines the characteristics of the users and the songs linearly and nonlinearly, the model is converged through continuous iterative training, and then a part of data is extracted from the running water of the songs listened by the users to verify the accuracy of the model, so that the Deep Neural Network (DNN) filtering model with high robustness is finally obtained.
Drawings
FIG. 1 is a schematic flow chart of a deep learning-based personalized music recommendation system according to the present invention;
FIG. 2 is a diagram of an exemplary system for recording songs listened by a user;
FIG. 3 is an exemplary diagram of a song recall model of the present invention;
FIG. 4 is a diagram illustrating an example of a user profile feature song set generation process of the present invention;
FIG. 5 is a diagram illustrating an exemplary process of generating song groups similar to the running water for listening to songs according to the present invention;
FIG. 6 is an exemplary diagram of a Deep Neural Network (DNN) filtering model of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
Examples
As shown in fig. 1, the personalized music recommendation system based on deep learning comprises a user song listening recording system, a song recall model and a deep neural network filtering model;
the user song listening recording system allocates a globally unique account to each user, records user information and user operation records, and obtains user portrait characteristics and the user's song listening stream, as shown in fig. 2;
the song recall model constructs a recall song group according to a user song listening recording system and a massive song library, wherein the recall song group comprises a user portrait song group and a user song listening stream similar song group, and is shown in figure 3;
according to a user song listening recording system, the deep neural network filtering model extracts all user song listening stream records from a user song listening stream, and divides the records into a training set and a test set, wherein the training set is used for training the deep neural network filtering model, and the test set is used for testing the accuracy of the model; the trained deep neural network filtering model is used for filtering the recall song group recommended by the song recall model, and finally generating a song combination recommended to a user after filtering and sorting by the deep neural network filtering model;
in this example, the information of the user, i.e. the user portrait characteristics, includes: the age of the user, the gender of the user, the occupation of the user, the region where the user is located, the singer preferred by the user, the singer not preferred by the user, the genre of songs not preferred by the user, the language of songs preferred by the user, and the language of songs not preferred by the user;
the user operation record, namely the song listening stream of the user, comprises: songs liked by a user, songs shared by the user, songs downloaded by the user, songs completely listened to by the user, songs commented by the user, songs set as background music by the user, songs lost by the user, songs cut by the user, songs recently listened by the user, songs derived by a source song completely listened to by the source song, songs derived by a source song lost by a garbage bin, and downloaded songs derived by a source song, wherein the source songs are songs on which similar songs are calculated;
in the embodiment, for a user portrait song group, the song recall model is used for counting the song group of each user portrait characteristic from a massive song library by using a statistical principle based on the user portrait characteristics, and screening the user portrait song group from the song library according to the existing user portrait characteristics of the user;
for the song group similar to the user's song listening stream, the song recall model maintains, according to the different user operation records, a mapping from the user's song listening stream to a candidate song group; the candidate song group contains no more than ten thousand songs selected from the ten-million-level song library as the songs to recommend to the user; similar songs are then calculated from the song lists in the target user's different stream operation lists, and a similar-song combination is maintained to construct the song group similar to the user's song listening stream.
In this embodiment, the deep neural network filtering model is built with the TensorFlow framework; the recalled song group recommended by the song recall model, the song characteristics and the user portrait characteristics are input into the deep neural network filtering model for filtering, the model predicts the probability that each song will be cut or liked, the songs are sorted by this probability, some songs are filtered out and the rest are sorted again, and a personalized song group is recommended to the user.
In this embodiment, the implementation method of the personalized music recommendation system based on deep learning includes the following steps:
s1, acquiring portrait characteristics of the user and song listening stream of the user according to the user information and user operation records recorded by the song listening recording system of the user; the user song listening recording system sets the user portrait characteristics which cannot be acquired to be null, and records the real values of the user portrait characteristics which can be acquired;
s2, selecting the user portrait song group: first, the song combinations corresponding to the existing user portrait characteristics are counted from the users' song listening streams, and a favorite song list is maintained for each dimension of each user portrait characteristic. For example, the user age characteristic can be divided into several dimensions, and a song list favored by teenagers, a song list favored by the middle-aged and a song list favored by the elderly are counted and maintained from the listening streams. For a new user, the counted feature dimensions are mapped to a set of favorite song combinations based on the portrait characteristics that the user already has, i.e., as shown in fig. 4, song group 1 is obtained from user characteristic 1, song group 2 from user characteristic 2, and so on up to song group m from characteristic m; these song groups are finally fused according to a proportion principle r to obtain the user portrait characteristic song group. The proportion r is obtained from engineering experience; the proportion coefficient r corresponding to each characteristic reflects the share that characteristic takes in actual application and is generally a fixed value;
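As an illustration only, a minimal sketch of this fusion step is given below; the feature names, the mapping tables and the ratio coefficients r are hypothetical placeholders rather than values taken from the patent.

```python
# Hypothetical sketch of step S2: fusing per-feature song lists into a user-portrait song group.
def build_portrait_song_group(user_features, feature_to_songs, ratios, group_size=1000):
    """Fuse the favorite song lists of each portrait feature according to fixed ratios r."""
    fused, seen = [], set()
    for feature, value in user_features.items():
        candidates = feature_to_songs.get((feature, value), [])
        quota = int(group_size * ratios.get(feature, 0))   # r: a fixed, experience-based ratio
        for song_id in candidates[:quota]:
            if song_id not in seen:                         # de-duplicate across feature groups
                seen.add(song_id)
                fused.append(song_id)
    return fused

# Example usage with made-up data:
user_features = {"age_band": "teenager", "region": "Guangdong"}
feature_to_songs = {("age_band", "teenager"): [101, 102, 103], ("region", "Guangdong"): [103, 204]}
ratios = {"age_band": 0.6, "region": 0.4}
print(build_portrait_song_group(user_features, feature_to_songs, ratios, group_size=5))
```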
s3, as shown in fig. 5, the method for selecting the group of songs similar to the flow of songs listened to by the user includes the following steps:
s31, selecting five groups of songs based on the song listening stream of the user: songs liked by the user (count n1), songs completely listened to (count n2), downloaded songs (count n3), shared songs (count n4) and songs set as background music (count n5); the total number of songs recorded in the user's song listening stream data is n = n1 + n2 + n3 + n4 + n5;
S32, setting weight coefficients for each group of song groups to reflect the song listening preference of the user, and constructing song listening running water song groups; after obtaining the song listening running water song groups, calculating the song listening running water similar song groups according to a filtering mode, namely maintaining a mapping F from the song listening running water song groups to the song listening running water similar song groups, wherein the F is a rule function for calculating the similarity of songs, and the mapping F calculates the similar songs of the song listening running water song groups and arranges the similar songs from high to low according to the similarity; step S32 specifically includes the following steps:
s321, constructing the song listening stream song group: let the weight coefficient ki denote the proportion of songs taken from the corresponding song group; because the songs the user likes and the songs the user listens to completely best reflect the user's listening preference, set k1 = 1 for liked songs and k2 = 1 for completely listened songs, meaning that all of the user's liked songs and completely listened songs are included in the song listening stream song group;
for downloaded songs (k3), shared songs (k4) and songs set as background music (k5): when n is 300 or less, k3 = k4 = k5 = 1, and all downloaded songs, shared songs and background-music songs are included in the song listening stream song group; when n1 + n2 is 300 or more, k3 = k4 = k5 = 0, and the downloaded, shared and background-music songs are not used; when n1 + n2 is less than 300, songs are drawn from n3, n4 and n5 (i = 3, 4, 5, where ni denotes n3, n4, n5) in proportion to supplement the group to 300 songs, selecting within each group in order of the user's operation time from most recent backwards, i.e., the most recently downloaded songs, most recently shared songs and most recently set background-music songs are drawn in proportion and added to the song listening stream song group;
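A minimal sketch of step S321 follows. The proportional top-up when n1 + n2 < 300 is an assumed reading (the patent's exact allocation formula is not reproduced in this text), and the function and variable names are illustrative.

```python
def build_stream_song_group(liked, listened, downloaded, shared, bgm, target=300):
    """Construct the song listening stream song group; each input list is assumed ordered by recency."""
    group = liked + listened                      # k1 = k2 = 1: always included
    n12 = len(group)
    if n12 >= target:                             # n1 + n2 >= 300: k3 = k4 = k5 = 0
        return group
    rest = [downloaded, shared, bgm]
    total_rest = sum(len(r) for r in rest)
    if n12 + total_rest <= target:                # n <= 300: k3 = k4 = k5 = 1, take everything
        for r in rest:
            group += r
        return group
    remaining = target - n12                      # n1 + n2 < 300: top up proportionally, most recent first
    for r in rest:
        quota = round(remaining * len(r) / total_rest)
        group += r[:quota]
    return group[:target]
```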
s322, after obtaining the song listening running water song group, calculating the song listening running water similar song group according to the item _ cf collaborative filtering mode, wherein the method for calculating the song similarity is carried out according to the following steps:
s3221, counting the songs liked by each user from the song listening streams, and maintaining a mapping Uin -> song_list, where Uin is the globally unique id of a user and song_list is the list of songs that user likes, i.e., counting which songs each user likes;
s3222, based on the statistics of step S3221, obtaining the mapping song -> Uin_list by transposing (inverting) the user-song matrix, where song is the id of each song and Uin_list is the set of users who like that song, i.e., counting which users like each song;
s3223, calculating the similarity of two songs songid(i) and songid(j), the specific formula being as follows:
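The formula image itself is not reproduced in this text; a reconstruction consistent with the definitions below and with the standard item_cf cosine similarity would be

$$\mathrm{sim}(i, j) = \frac{\lvert Uin\_list(i) \cap Uin\_list(j) \rvert}{\sqrt{\lvert Uin\_list(i) \rvert \cdot \lvert Uin\_list(j) \rvert}}$$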
where |Uin_list(i)| denotes the number of users who like song i, |Uin_list(j)| the number of users who like song j, and |Uin_list(i) ∩ Uin_list(j)| the number of users who like both song i and song j; the similarity between any two songs in the song library is obtained from this formula;
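A compact sketch of steps S3221–S3223 is shown below; the function and variable names are illustrative, and the pairwise similarity assumes the cosine-style reconstruction given above.

```python
from collections import defaultdict
from math import sqrt

def item_cf_similarity(uin_to_songs):
    """uin_to_songs: mapping Uin -> list of liked song ids (step S3221)."""
    song_to_uins = defaultdict(set)               # step S3222: invert to song -> users who like it
    for uin, songs in uin_to_songs.items():
        for s in songs:
            song_to_uins[s].add(uin)
    sim, songs = {}, list(song_to_uins)
    for a in range(len(songs)):                   # step S3223: pairwise similarity over the library
        for b in range(a + 1, len(songs)):
            i, j = songs[a], songs[b]
            common = len(song_to_uins[i] & song_to_uins[j])
            if common:
                sim[(i, j)] = common / sqrt(len(song_to_uins[i]) * len(song_to_uins[j]))
    return sim

print(item_cf_similarity({"u1": [1, 2], "u2": [1, 2, 3], "u3": [2, 3]}))
```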
s4, constructing the deep neural network filtering model with the TensorFlow framework, then extracting all of the user song listening stream records from the users' song listening streams and dividing them into a training set and a test set at a ratio of 3:1; the training set is used to train the deep neural network filtering model and the test set to test the accuracy of the model; each song listening stream record contains user characteristic information and the corresponding song characteristic information; the deep neural network filtering model automatically learns the characteristic information of users and songs, combines the user and song characteristics linearly and nonlinearly, and converges through continuous iterative training;
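A minimal TensorFlow/Keras sketch of such a filtering model is given below; the feature dimension, layer sizes and random placeholder data are illustrative assumptions, not the patent's actual configuration. Only the 3:1 split and the liked/cut binary target come from the text.

```python
import numpy as np
import tensorflow as tf

# Assume each listening-stream record has been turned into a fixed-length feature vector
# (scene + song + source-song + user features) with a binary label: 1 = liked, 0 = cut.
num_records, feature_dim = 4000, 64
features = np.random.rand(num_records, feature_dim).astype("float32")   # placeholder data
labels = np.random.randint(0, 2, size=(num_records, 1)).astype("float32")

split = num_records * 3 // 4                     # 3:1 train/test split
x_train, x_test = features[:split], features[split:]
y_train, y_test = labels[:split], labels[split:]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(feature_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability the song is liked
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=0)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```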
s5, as shown in FIG. 6, preprocessing the recalled song group, inputting the preprocessed recalled song group into the trained deep neural network filtering model for filtering, and sequencing the filtered songs to finally generate a song group recommended to the user; the method specifically comprises the following steps:
s51, preprocessing: because the similar songs in the song listening stream similar song group are obtained by similarity calculation, they may include illegal songs, so the recalled song group is preliminarily filtered to remove illegal songs and songs that need not be recommended. The filtering rules include, but are not limited to, the following: songs or singers blacklisted by the user, i.e., songs or singers the user has explicitly disliked and thrown into the trash bin, are filtered out; songs blacklisted by the system, i.e., a manually maintained list of songs deemed unsuitable for pushing to users, including vulgar songs, songs with advertisements, and songs or accompaniments related to religion, are filtered out; similar songs in a different language, i.e., similar songs whose language differs from that of the source song, are filtered out; songs the user has liked, shared, completely listened to or set as background music, and the 2000 songs the user listened to most recently, are filtered out, since songs the user has already heard need not be recommended again;
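An illustrative sketch of this preliminary filter follows; the blacklist sets and the recent/asset song set mirror the rules above, while the candidate record schema and names are assumptions.

```python
def prefilter(candidates, user_blacklist, system_blacklist, recent_and_asset_songs):
    """candidates: list of dicts with 'song_id', 'language' and 'source_language' keys (assumed schema)."""
    kept = []
    for c in candidates:
        if c["song_id"] in user_blacklist or c["song_id"] in system_blacklist:
            continue                                  # user- or system-blacklisted songs
        if c.get("source_language") and c["language"] != c["source_language"]:
            continue                                  # similar song not in the source song's language
        if c["song_id"] in recent_and_asset_songs:
            continue                                  # liked/shared/fully-listened/BGM + 2000 recent songs
        kept.append(c)
    return kept
```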
s52, inputting the preprocessed songs into a trained deep neural network filtering model, wherein the deep neural network filtering model predicts and outputs the probability that each song is liked or cut, and songs with liked probability lower than a custom threshold or with cut probability higher than the custom threshold are filtered; the song input into the deep neural network model has four kinds of characteristic information and combination characteristics thereof, wherein the four kinds of characteristic information are scene characteristic information, song characteristic information, source song characteristic information and user characteristic information; the combination characteristics are counted out through the song listening stream of the user;
and then integrating the combined features of each song into a one-dimensional column vector, inputting the one-dimensional column vector into a deep neural network filtering model, automatically learning scene feature information, song feature information, source song feature information, user feature information and linear and nonlinear relations among the four kinds of feature information by the model, and predicting the probability that the current song is liked by the user or the probability that the song is cut by the user according to the four kinds of feature information.
S53, after the probability of being liked or cut is predicted by the recalled song group through the deep neural network filtering model, a batch of songs are filtered according to the custom threshold value, and then the filtered songs are sorted, wherein the sorting steps are as follows:
s531, arranging the song groups from high to low according to probability values of songs possibly liked by the user;
s532, in order to avoid always recommending songs of the same singer, the singers need to be scattered, the scattering step length is set to be S, the adjacent recommended S songs cannot have the same singer, and the favorite songs of the same singer with low probability are skipped in the step length S;
s533, to enrich the recommendation style, songs with a somewhat lower liked probability are inserted in advance: let the number of recommended songs be M, sort M × 2 songs from high to low by liked probability, count the language, genre, year and rhythm proportions among the first M songs, and from the last M songs (those still above the custom threshold) select songs of a different language, genre, year or rhythm to place into the recommendation.
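A rough sketch of the re-ranking in steps S531–S532 is given below; the singer-scatter step length is a parameter, and this greedy pass is only one possible reading of the text rather than the patent's exact procedure.

```python
def scatter_by_singer(songs, step=3):
    """songs: list of (song_id, singer, liked_prob), step S531 sorts by liked_prob descending;
    step S532: greedily pick songs so no singer repeats within `step` consecutive positions."""
    ordered, pool = [], sorted(songs, key=lambda x: x[2], reverse=True)
    while pool:
        recent = {s[1] for s in ordered[-(step - 1):]} if step > 1 else set()
        pick = next((s for s in pool if s[1] not in recent), pool[0])  # fall back if all are blocked
        ordered.append(pick)
        pool.remove(pick)
    return ordered

print(scatter_by_singer([(1, "A", 0.9), (2, "A", 0.8), (3, "B", 0.7), (4, "C", 0.6)], step=2))
```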
In this embodiment, the combined features in step S52 include the following features:
current year, current month, current time, current location (obtained from ip), current weather, current temperature;
whether the song is live edition, song playing time, song heat value (how many people collect), song genre, song language, song age, song rhythm and song singer;
whether the source song is live edition or not, the playing time of the source song, the time length of the source song, the heat value (how many people to collect) of the source song, the genre of the source song, the language type of the source song, the year, the rhythm and the singer of the source song;
source song derived song playing times, source song operating time, similarity distance, similarity, whether the source song and the similar song are by the same singer, whether the source song is liked, whether the source song is downloaded, whether the source song is completely listened to, whether the source song is set as background music, whether the source song is shared, and the using times (playing times) of the source song;
user asset favorite song number ratio, user asset download song number ratio, user asset share song number ratio, user asset setting background music ratio, source song being completely listened to, asset source song singer ratio, asset source song singer recent complete listened song time, asset source song year ratio, asset source song year time, asset source song language ratio, asset source song language type recent complete listened song time, asset source song genre ratio, asset source song genre recent complete listened song time, asset source song rhythm ratio, asset source song rhythm recent complete listened song time, user age and user gender;
the source song deduces the song liked to account for, the source song deduces the song completely listened to account for, the source song deduces the song cut account for, the source song deduces the song downloaded account for, the source song deduces the song lost trash can account for, the source song deduces the song shared account for, the source song deduces the song set background music account for;
songs pushed out by the source song's singer that are liked (ratio), completely listened to (ratio), cut (ratio), downloaded (ratio), thrown into the trash bin (ratio), shared (ratio), and set as background music (ratio);
the source song genre deduces that the song is liked to be shared, the source song genre deduces that the song is completely listened to be shared, the source song genre deduces that the song is cut to be shared, the source song genre deduces that the song is downloaded to be shared, the source song genre deduces that the song is lost to be shared, and the source song genre deduces that the song is set to be background music to be shared;
source song language type pushed song liked occupation ratio, source song language type pushed song completely listened occupation ratio, source song language type pushed song cut occupation ratio, source song language type pushed song downloaded occupation ratio, source song language type pushed song lost-to-trash occupation ratio, source song language type pushed song shared occupation ratio and source song language type pushed song set-background-music occupation ratio;
the source song year deducing song liked occupation ratio, the source song year deducing song completely listened occupation ratio, the source song year deducing song cut occupation ratio, the source song year deducing song downloaded occupation ratio, the source song year deducing song lost garbage bin occupation ratio, the source song year deducing song shared occupation ratio and the source song year deducing song set background music occupation ratio;
the source song rhythm deduces the song liked occupation ratio, the source song rhythm deduces the song completely listened occupation ratio, the source song rhythm deduces the song cut occupation ratio, the source song rhythm deduces the song downloaded occupation ratio, the source song rhythm deduces the song lost garbage bin occupation ratio, the source song rhythm deduces the song shared occupation ratio and the source song rhythm deduces the song set background music occupation ratio;
the language with the most completely listened songs in the last period, the language with the most cut songs in the last period, the language with the most liked songs in the last period, the language with the most downloaded songs in the last period, and the language with the most songs thrown into the trash bin in the last period;
the genre with the most completely listened songs in the last period, the genre with the most liked songs in the last period, the genre with the most downloaded songs in the last period, and the genre with the most songs thrown into the trash bin in the last period;
the singer with the most completely listened songs in the last period, the singer with the most cut songs in the last period, the singer with the most liked songs in the last period, the singer with the most downloaded songs in the last period, and the singer with the most songs thrown into the trash bin in the last period;
the rhythm with the most completely listened songs in the last period, the rhythm with the most cut songs in the last period, the rhythm with the most liked songs in the last period, the rhythm with the most downloaded songs in the last period, and the rhythm with the most songs thrown into the trash bin in the last period.
The deep neural network model in this embodiment is built with the TensorFlow framework, and the settings of each layer are easy to implement with common methods; the innovation lies in using the deep neural network model to learn the underlying song, user and behavior features, filter the songs, and then make the recommendation.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the claims.
Claims (9)
1. An implementation method of a personalized music recommendation system based on deep learning is disclosed, wherein the personalized music recommendation system comprises a user song listening recording system, a song recall model and a deep neural network filtering model, and the method comprises the following steps:
s1, the user singing recording system sets the user portrait characteristics which cannot be obtained to be null, and records the real values of the user portrait characteristics which can be obtained;
s2, selecting the user portrait song group: counting the song combinations corresponding to the existing user portrait characteristics from the users' song listening streams, and maintaining a favorite song list for each dimension of each user portrait characteristic; for a new user, the counted feature dimensions are mapped to a set of favorite song combinations based on the user portrait characteristics that the user already has;
s3, selecting the group of similar songs from the song listening stream of the user, comprising the following steps:
s31, selecting five groups of songs based on the song listening stream of the user: songs liked by the user (count n1), songs completely listened to (count n2), downloaded songs (count n3), shared songs (count n4) and songs set as background music (count n5); the total number of songs recorded in the user's song listening stream data is n = n1 + n2 + n3 + n4 + n5;
S32, setting weight coefficients for each group of song groups to reflect the song listening preference of the user, and constructing song listening running water song groups; after obtaining the song listening running water song groups, calculating the song listening running water similar song groups according to a filtering mode, namely maintaining a mapping F from the song listening running water song groups to the song listening running water similar song groups, wherein the F is a rule function for calculating the similarity of songs, and the mapping F calculates the similar songs of the song listening running water song groups and arranges the similar songs from high to low according to the similarity;
s4, constructing a deep neural network filtering model, and training and testing; extracting all the running records of the singing of the user from the running water of the singing of the user, dividing the running records into a training set and a test set, wherein the training set is used for training a deep neural network filtering model, and the test set is used for testing the accuracy of the model;
s5, preprocessing the recalled song group, and inputting the preprocessed recalled song group into the trained deep neural network filtering model; the recall song group comprises a user portrait song group and a user song listening running water similar song group;
s51, preprocessing: preliminarily filtering the recalled song group, and filtering out songs which are possibly illegal and unnecessary to recommend;
s52, inputting the preprocessed songs into a trained deep neural network filtering model, wherein the deep neural network filtering model predicts and outputs the probability that each song is liked or cut, and songs with liked probability lower than a custom threshold or with cut probability higher than the custom threshold are filtered; the song input into the deep neural network model has four kinds of characteristic information and combination characteristics thereof, wherein the four kinds of characteristic information are scene characteristic information, song characteristic information, source song characteristic information and user characteristic information; the combination characteristics are counted out through the song listening stream of the user;
s53, after the probability of being liked or cut is predicted by the recalled song group through the deep neural network filtering model, a batch of songs are filtered according to a custom threshold value, and then the filtered songs are sequenced;
the user song listening recording system allocates a globally unique account for each user, is used for recording user information and user operation records, and obtains user portrait characteristics and user song listening pipelining;
the song recall model constructs a recall song group according to a user song listening recording system and a massive song library, wherein the recall song group comprises a user portrait song group and a user song listening running water similar song group;
according to a user song listening recording system, the deep neural network filtering model extracts all user song listening stream records from a user song listening stream, and divides the records into a training set and a test set, wherein the training set is used for training the deep neural network filtering model, and the test set is used for testing the accuracy of the model; and the trained deep neural network filtering model is used for filtering the recall song group recommended by the song recall model, and finally generating a song combination recommended to the user after filtering and sorting by the deep neural network filtering model.
2. The implementation method of claim 1, wherein the step S32 specifically includes the following steps:
s321, constructing the song listening stream song group: let the weight coefficient ki denote the proportion of songs taken from the corresponding song group; because the songs the user likes and the songs the user listens to completely best reflect the user's listening preference, set k1 = 1 for liked songs and k2 = 1 for completely listened songs, meaning that all of the user's liked songs and completely listened songs are included in the song listening stream song group;
for downloaded songs (k3), shared songs (k4) and songs set as background music (k5): when n is 300 or less, k3 = k4 = k5 = 1, and all downloaded songs, shared songs and background-music songs are included in the song listening stream song group; when n1 + n2 is 300 or more, k3 = k4 = k5 = 0, and the downloaded, shared and background-music songs are not used; when n1 + n2 is less than 300, songs are drawn from n3, n4 and n5 (i = 3, 4, 5, where ni denotes n3, n4, n5) in proportion to supplement the group to 300 songs, selecting within each group in order of the user's operation time from most recent backwards, i.e., the most recently downloaded songs, most recently shared songs and most recently set background-music songs are drawn in proportion and added to the song listening stream song group;
s322, after obtaining the song listening running water song group, calculating the song listening running water similar song group according to the item _ cf collaborative filtering mode, wherein the method for calculating the song similarity is carried out according to the following steps:
s3221, counting songs liked by each user based on the song listening pipelining, and maintaining a mapping of Uin to song _ list, wherein the Uin represents a globally unique id of the user, and the song _ list represents a list of songs liked by the user, namely counting which songs are liked by each user;
s3222, based on the statistics of the step S3221, obtaining mapping of song- - > Uin _ list by using a transposition matrix inversion method, wherein song represents id of each song, Uin _ list represents users who like the song, namely, statistics is carried out on which users like each song;
s3223, calculating the similarity between two songs songid(i) and songid(j), the specific formula being as follows:
where |Uin_list(i)| denotes the number of users who like song i, |Uin_list(j)| the number of users who like song j, and |Uin_list(i) ∩ Uin_list(j)| the number of users who like both song i and song j; the similarity between any two songs in the song library is obtained from this formula.
3. The method according to claim 1, wherein in step S4, the deep neural network filtering model is built with the TensorFlow framework, and all of the user song listening stream records are extracted from the users' song listening streams and divided into a training set and a test set at a ratio of 3:1; each song listening stream record contains user characteristic information and the corresponding song characteristic information; the deep neural network filtering model automatically learns the characteristic information of users and songs, combines the user and song characteristics linearly and nonlinearly, and converges through continuous iterative training.
4. The method of claim 1, wherein in step S51, the group of recalled songs is preliminarily filtered according to the following rules:
the user blacklists songs or singers, namely the songs or singers which are definitely disliked by the user and are lost into the garbage can, are filtered; system blacklisting of songs, manually maintaining a list of songs deemed unsuitable for pushing to a user, including interesting vulgar songs, songs with advertisements, and songs or accompaniments related to religion; similar songs in different languages, namely, similar songs are filtered out when the source song and the similar songs are not in the same language; the songs that the user likes, the songs that the user shares, the songs that the user completely listens to the songs, the songs that the user sets the background music, and the songs that the user has listened to recently of 2000 songs are filtered out, that is, the songs that the user has listened to do not need to be recommended in a personalized way.
5. The method according to claim 1, wherein in step S52, the combined features of each song are integrated into a one-dimensional column vector, and the one-dimensional column vector is input into a deep neural network filtering model, which automatically learns the scene feature information, the song feature information, the source song feature information, the user feature information, and the linear and nonlinear relations between the four feature information, and predicts the probability that the current song is liked by the user or the probability that the song is cut from the four feature information.
6. The method according to claim 1, wherein the step S53 of sorting the filtered songs includes the following steps:
s531, arranging the song groups from high to low according to probability values of songs possibly liked by the user;
s532, in order to avoid always recommending songs of the same singer, the singers need to be scattered, the scattering step length is set to be S, the adjacent recommended S songs cannot have the same singer, and the favorite songs of the same singer with low probability are skipped in the step length S;
s533, to enrich the recommendation style, songs with a somewhat lower liked probability are inserted in advance: let the number of recommended songs be M, sort M × 2 songs from high to low by liked probability, count the language, genre, year and rhythm proportions among the first M songs, and from the last M songs (those still above the custom threshold) select songs of a different language, genre, year or rhythm to place into the recommendation.
7. The method of claim 1, wherein the user information, i.e. user portrait characteristics, comprises: the age of the user, the gender of the user, the occupation of the user, the region where the user is located, the singer preferred by the user, the singer not preferred by the user, the genre of songs not preferred by the user, the language of songs preferred by the user, and the language of songs not preferred by the user;
the user operation record, namely the song listening stream of the user, comprises: songs liked by the user, songs shared by the user, songs downloaded by the user, songs completely listened to by the user, songs commented by the user, songs set as background music by the user, songs lost by the user in a trash can, songs cut by the user, songs recently listened by the user, songs derived from a source song to be liked, songs derived from a source song to be completely listened to by the source song, songs derived from a source song to be cut, songs derived from a source song to be lost in a trash can, and songs derived from a source song to be downloaded; the source song is a song on which similar songs are calculated.
8. The implementation method of claim 1, wherein for the user portrait song group, the song recall model calculates a song group in which each user portrait feature is maintained from a massive song library by using a statistical principle based on the user portrait features, and screens the user portrait song group from the song library according to the user portrait features existing in the user;
for a song group similar to the running water of the songs listened by the user, the song recall model maintains mapping from the running water of the songs listened by the user to a candidate song group according to different user operation records, the candidate song group selects within ten thousand songs from a massive song library at the ten-million level as a song group recommended to the user, then similar songs are calculated according to song lists in different running water operation lists of the target user, and a similar song combination is maintained to construct the song group similar to the running water of the songs listened by the user.
9. The implementation method of claim 1, wherein the deep neural network filtering model is built with the TensorFlow framework; the recalled song group recommended by the song recall model, the song characteristics and the user portrait characteristics are input into the deep neural network filtering model for filtering, the model predicts the probability that each song will be cut or liked, the songs are sorted by this probability, some songs are filtered out and the rest are sorted again, and a personalized song group is recommended to the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810213304.4A CN108509534B (en) | 2018-03-15 | 2018-03-15 | Personalized music recommendation system based on deep learning and implementation method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810213304.4A CN108509534B (en) | 2018-03-15 | 2018-03-15 | Personalized music recommendation system based on deep learning and implementation method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108509534A CN108509534A (en) | 2018-09-07 |
CN108509534B true CN108509534B (en) | 2022-03-25 |
Family
ID=63377644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810213304.4A Active CN108509534B (en) | 2018-03-15 | 2018-03-15 | Personalized music recommendation system based on deep learning and implementation method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108509534B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109582817A (en) * | 2018-10-30 | 2019-04-05 | 努比亚技术有限公司 | A kind of song recommendations method, terminal and computer readable storage medium |
CN109446350B (en) * | 2018-11-09 | 2022-03-15 | 腾讯音乐娱乐科技(深圳)有限公司 | Multimedia playing method, device, terminal and storage medium |
CN109739972A (en) * | 2018-12-27 | 2019-05-10 | 上海连尚网络科技有限公司 | A kind of novel recommended method and equipment |
US11157557B2 (en) * | 2019-01-18 | 2021-10-26 | Snap Inc. | Systems and methods for searching and ranking personalized videos |
CN109982155B (en) * | 2019-03-25 | 2021-10-12 | 北京奇艺世纪科技有限公司 | Playlist recommendation method and system |
CN110222226B (en) * | 2019-04-17 | 2024-03-12 | 平安科技(深圳)有限公司 | Method, device and storage medium for generating rhythm by words based on neural network |
CN110442746B (en) * | 2019-07-01 | 2023-04-28 | 佛山科学技术学院 | Intelligent music pushing method based on random forest algorithm and storage medium |
CN110675893B (en) * | 2019-09-19 | 2022-04-05 | 腾讯音乐娱乐科技(深圳)有限公司 | Song identification method and device, storage medium and electronic equipment |
CN110807693A (en) * | 2019-11-04 | 2020-02-18 | 上海喜马拉雅科技有限公司 | Album recommendation method, device, equipment and storage medium |
CN110942376B (en) * | 2019-12-02 | 2023-09-01 | 上海麦克风文化传媒有限公司 | Fusion method of real-time multi-recall strategy of audio product |
CN111078931B (en) * | 2019-12-10 | 2023-08-01 | 腾讯科技(深圳)有限公司 | Song list pushing method, device, computer equipment and storage medium |
CN110990621B (en) * | 2019-12-16 | 2023-10-13 | 腾讯科技(深圳)有限公司 | Song recommendation method and device |
CN111209953B (en) * | 2020-01-03 | 2024-01-16 | 腾讯科技(深圳)有限公司 | Recall method, recall device, computer equipment and storage medium for neighbor vector |
CN110930203A (en) * | 2020-02-17 | 2020-03-27 | 京东数字科技控股有限公司 | Information recommendation model training method and device and information recommendation method and device |
CN111324813A (en) * | 2020-02-20 | 2020-06-23 | 深圳前海微众银行股份有限公司 | Recommendation method, device, equipment and computer readable storage medium |
CN111488485B (en) * | 2020-04-16 | 2023-11-17 | 北京雷石天地电子技术有限公司 | Music recommendation method based on convolutional neural network, storage medium and electronic device |
CN112287160B (en) * | 2020-10-28 | 2023-12-12 | 广州欢聊网络科技有限公司 | Method and device for ordering audio data, computer equipment and storage medium |
CN112333596B (en) * | 2020-11-05 | 2024-06-04 | 江苏紫米电子技术有限公司 | Earphone equalizer adjustment method, device, server and medium |
CN112860937B (en) * | 2021-01-28 | 2022-09-02 | 陕西师范大学 | KNN and word embedding based mixed music recommendation method, system and equipment |
CN113055059B (en) * | 2021-02-01 | 2022-08-09 | 厦门大学 | Beam management method for large-scale MIMO millimeter wave communication and millimeter wave base station |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105574216A (en) * | 2016-03-07 | 2016-05-11 | 达而观信息科技(上海)有限公司 | Personalized recommendation method and system based on probability model and user behavior analysis |
CN105930429A (en) * | 2016-04-19 | 2016-09-07 | 乐视控股(北京)有限公司 | Music recommendation method and apparatus |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120023403A1 (en) * | 2010-07-21 | 2012-01-26 | Tilman Herberger | System and method for dynamic generation of individualized playlists according to user selection of musical features |
- 2018-03-15: Application CN201810213304.4A filed in China; granted as patent CN108509534B (en); legal status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105574216A (en) * | 2016-03-07 | 2016-05-11 | 达而观信息科技(上海)有限公司 | Personalized recommendation method and system based on probability model and user behavior analysis |
CN105930429A (en) * | 2016-04-19 | 2016-09-07 | 乐视控股(北京)有限公司 | Music recommendation method and apparatus |
Non-Patent Citations (1)
Title |
---|
Mobile Video Recommendation Strategy Based on the DNN Algorithm (基于DNN算法的移动视频推荐策略); Chen Liang et al.; Chinese Journal of Computers (《计算机学报》); 2016-08-31; Vol. 39, No. 8; pp. 1626-1638 *
Also Published As
Publication number | Publication date |
---|---|
CN108509534A (en) | 2018-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108509534B (en) | Personalized music recommendation system based on deep learning and implementation method thereof | |
CN104731954B (en) | Group-perspective-based music recommendation method and system | |
CN110704674B (en) | Video playing integrity prediction method and device | |
CN110297848A (en) | Recommended models training method, terminal and storage medium based on federation's study | |
US20140180762A1 (en) | Systems and methods for customized music selection | |
CN103455538B (en) | Information processing unit, information processing method and program | |
CN106062730A (en) | Systems and methods for actively composing content for use in continuous social communication | |
CN105975496A (en) | Music recommendation method and device based on context sensing | |
CN106326351A (en) | Recommendation system cold start solving method based on user feedback | |
KR20080089545A (en) | Information processing device and method, and program | |
DE102008044635A1 (en) | Apparatus and method for providing a television sequence | |
Oliveira et al. | Detecting Collaboration Profiles in Success-based Music Genre Networks. | |
CN105681908A (en) | Broadcast television system based on individual watching behaviour and personalized programme recommendation method thereof | |
CN109271550A (en) | A kind of music personalization classification recommended method based on deep learning | |
CN107977373A (en) | A kind of recommendation method of song | |
CN105590240A (en) | Discrete calculating method of brand advertisement effect optimization | |
CN107920260A (en) | Digital cable customers behavior prediction method and device | |
CN115470344A (en) | Video barrage and comment theme fusion method based on text clustering | |
CN104268130A (en) | Social advertising facing Twitter feasibility analysis method | |
CN117763228A (en) | Creative expression dynamic adaptation method based on multi-culture framework | |
WO2010119181A1 (en) | Video editing system | |
CN116861063B (en) | Method for exploring commercial value degree of social media hot search | |
KR102643159B1 (en) | A matching method that finds empty space in lcl containers in real time during container import and export | |
CN110442746A (en) | A kind of intelligent music method for pushing and storage medium based on random forests algorithm | |
CN114363660B (en) | Video collection determining method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||