
CN118312267B - Interaction method, device, equipment and storage medium based on artificial intelligence - Google Patents

Interaction method, device, equipment and storage medium based on artificial intelligence

Info

Publication number
CN118312267B
CN118312267B CN202410721137.XA CN202410721137A
Authority
CN
China
Prior art keywords
interactive
interaction
image
recommended
portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410721137.XA
Other languages
Chinese (zh)
Other versions
CN118312267A (en)
Inventor
刘宏
陈军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pinkuo Information Technology Co ltd
Original Assignee
Shenzhen Pinkuo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pinkuo Information Technology Co ltd filed Critical Shenzhen Pinkuo Information Technology Co ltd
Priority to CN202410721137.XA priority Critical patent/CN118312267B/en
Publication of CN118312267A publication Critical patent/CN118312267A/en
Application granted granted Critical
Publication of CN118312267B publication Critical patent/CN118312267B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/453Help systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3438Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment monitoring of user actions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • G06F16/24578Query processing with adaptation to user needs using ranking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/248Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of intelligent interaction and discloses an interaction method, device, equipment and storage medium based on artificial intelligence. The invention builds an interaction portrait of a user from collected interaction information using a pre-trained model, analyzes the portrait to generate a recommended interaction interface and an analysis framework, and captures the user's interaction data in real time. The behavior is analyzed through the analysis framework, the user portrait is corrected according to the analysis result, and the recommended interaction interface is adjusted according to the updated portrait. Through real-time updating the system can provide a more personalized user experience, and the dynamically adjusted interface lets the user interact with the system more efficiently, solving the prior-art problem that users unfamiliar with intelligent terminal products find them difficult to use efficiently.

Description

Interaction method, device, equipment and storage medium based on artificial intelligence
Technical Field
The present invention relates to the field of intelligent interaction technologies, and in particular, to an interaction method, apparatus, device, and storage medium based on artificial intelligence.
Background
With the development of electronic technology, the functions of intelligent terminal products (such as computers and mobile phones) have become more complex, so users unfamiliar with them cannot use many of their functions smoothly; a user may purchase an expensive high-performance product yet never use the high-end functions that justify its price.
At present, some intelligent terminal products provide function-recommendation features to help users quickly become familiar with how the product is operated. However, the recommended functions are relatively fixed: for different users they contain many redundant entries, while less common functions that some users actually need are not included in the recommendations.
Disclosure of Invention
The invention aims to provide an interaction method, device, equipment and storage medium based on artificial intelligence, so as to solve the prior-art problem that users unfamiliar with intelligent terminal products find it difficult to use them efficiently.
To this end, in a first aspect, the present invention provides an artificial intelligence based interaction method, comprising:
Continuously collecting interaction information of an interaction object, and constructing an interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
Performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object;
And acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame so as to correct the interaction image of the interaction object, and adjusting the recommended interaction interface based on the corrected interaction image.
Preferably, the step of continuously collecting the interaction information of the interaction object and constructing the interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model comprises the following steps:
collecting interaction behaviors of the interaction objects, recording type marks and time marks corresponding to the interaction behaviors, and binding the type marks and the time marks with the interaction behaviors to obtain interaction information of the interaction objects;
performing object feature extraction processing of an interactive object on the interactive information according to a pre-trained user portrait intelligent model so as to obtain a plurality of object features of the interactive object;
Performing object tracing processing on each object feature to obtain the object vector group to which each object feature points for the interactive object; the object vector group comprises a plurality of object image intervals to which the object feature points for the interactive object and an interval certainty factor corresponding to each object image interval, wherein an object image interval is used for describing one interactive image of the interactive object, and the interval certainty factor is used for describing how likely the interactive object is to correspond to that object image interval;
carrying out integrated analysis processing on each object vector group to obtain the overall certainty factor of each object image interval; the overall certainty factor is a superposition result of the interval certainty factor of each object vector group in the object image interval;
And judging the overall certainty factor of each object image section according to the certainty standard so as to exclude the object image sections which do not meet the certainty standard, and taking the object image sections which meet the certainty standard as the interactive image of the interactive object.
Preferably, the step of pre-training the user portrait intelligent model comprises:
acquiring a plurality of sets of training data; the training data comprises interactive information data and object feature data, wherein the interactive information data is used for describing interactive information of an interactive object, and the object feature data is used for describing object features of the interactive object;
Constructing an input layer, a convolution layer, three full connection layers and an output layer;
Substituting each set of the training data into the input layer;
The input layer receives the collected training data of each group and transmits the training data of each group to the convolution layer, and the convolution layer is used for carrying out feature collection on the training data of each group so as to obtain interactive mapping features of the training data of each group; the interactive mapping feature is used for describing a mapping relation between the interactive information data and the object feature data in the training data, and the mapping relation is used for carrying out mapping processing on the interactive information so as to obtain the object feature corresponding to the interactive information;
The three full connection layers are used for carrying out continuous vector flattening processing on the various interactive mapping features extracted by the convolution layer so as to flatten the various interactive mapping features into one-dimensional vector features; the one-dimensional vector features give a compact vector representation of the various interactive mapping features;
The output layer is used for outputting the one-dimensional vector features expanded by the full connection layer.
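The layer stack described above (input layer, convolution layer, three full connection layers, output layer) can be sketched as a minimal numeric forward pass. The layer widths, kernel count, and ReLU activations below are illustrative assumptions, since the text does not fix them:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid-mode 1-D convolution of the input with each kernel (feature collection)."""
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])

def forward(x, kernels, fc_weights):
    """Input layer -> convolution layer -> three full connection layers -> output."""
    features = conv1d(x, kernels)      # interactive mapping features
    h = features.ravel()               # flatten into a one-dimensional vector
    for w in fc_weights:               # three successive fully connected layers
        h = np.maximum(w @ h, 0.0)     # linear map followed by ReLU (assumed)
    return h                           # the one-dimensional vector feature

rng = np.random.default_rng(0)
x = rng.normal(size=32)                # one group of encoded training data
kernels = rng.normal(size=(4, 5))      # 4 convolution kernels of width 5
flat = 4 * (32 - 5 + 1)                # flattened length: 4 * 28 = 112
fc_weights = [rng.normal(size=(64, flat)),
              rng.normal(size=(32, 64)),
              rng.normal(size=(16, 32))]
out = forward(x, kernels, fc_weights)
print(out.shape)   # (16,)
```

In practice the weights would be fitted to the (interaction information, object feature) training pairs; here they are random only to show the shape flow.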
Preferably, the step of performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object includes:
Analyzing and processing object recommendation functions of each object image section of the interactive image respectively to obtain recommendation functions of each object image section corresponding to the interactive image and function attribution labels corresponding to each recommendation function, and generating corresponding function priority indexes of the recommendation functions according to the overall certainty factor of each object image section;
According to the function attribution labels and the function priority indexes of the recommended functions, listing the recommended functions to obtain a recommended function list; the recommending function list is provided with a parallel structure and a nested structure, and each recommending function is arranged in the recommending function list in the form of the parallel structure or the nested structure;
Generating corresponding function link ports according to the recommended functions; the function link port is used for enabling the interaction object to realize function interaction with the recommendation function;
According to the recommended function list, performing tabulation processing on each function link port to obtain a recommended interaction interface corresponding to the interaction object;
Performing predictive analysis processing on the interaction behavior based on the recommended interaction interface to obtain a plurality of possible interaction behaviors of the interaction object on the recommended interaction interface, and performing expansion analysis processing on the interactive image according to the various possible interaction behaviors to obtain a plurality of portrait correction directions of the interactive image; wherein a portrait correction direction describes a direction in which the interactive image of the interactive object may be corrected;
and taking the various portrait correction directions of the interactive image together as the interaction analysis framework of the interactive image.
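One plausible reading of the list-building step above, in which recommended functions are grouped by attribution label (the nested structure) and ordered by priority index; the label and priority schema is an assumption for illustration:

```python
from collections import defaultdict

def build_function_list(functions):
    """Group recommended functions by attribution label (nested structure),
    then order labels and their members by priority index, highest first."""
    groups = defaultdict(list)
    for f in functions:
        groups[f["label"]].append(f)
    nested = []
    for label, members in groups.items():
        members.sort(key=lambda f: f["priority"], reverse=True)
        nested.append({"label": label, "functions": members})
    # parallel structure: label groups ranked by their best member's priority
    nested.sort(key=lambda g: g["functions"][0]["priority"], reverse=True)
    return nested

# hypothetical recommended functions derived from two object image intervals
funcs = [
    {"name": "voice_assistant", "label": "accessibility", "priority": 0.9},
    {"name": "screen_zoom",     "label": "accessibility", "priority": 0.7},
    {"name": "photo_enhance",   "label": "camera",        "priority": 0.8},
]
menu = build_function_list(funcs)
print([g["label"] for g in menu])   # ['accessibility', 'camera']
```

Each entry in the nested list would then be rendered as a function link port when the recommended interaction interface is tabulated.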
Preferably, the step of acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame to correct an interaction image of the interaction object, and performing adjustment processing on the recommended interaction interface based on the corrected interaction image includes:
acquiring real-time interaction information of the interaction object on the recommended interaction interface;
Performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis frame to obtain correction vectors of the real-time interactive information corresponding to the portrait correction directions; the correction vector is used for describing the tendency degree of the real-time interaction information in each portrait correction direction;
based on the correction vectors of the real-time interaction information corresponding to the image correction directions, correcting the interaction images;
Generating a plurality of new recommended functions based on the revised interactive image, and analyzing and processing list positions of the new recommended functions based on the recommended function list to obtain setting positions of the new recommended functions in the recommended function list;
generating corresponding newly-added link ports according to the newly-added recommending functions, and setting the newly-added link ports corresponding to the newly-added recommending functions at corresponding positions in the recommending interactive interface according to the setting positions of the newly-added recommending functions in the recommending function list.
Preferably, the step of performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis framework to obtain correction vectors of the real-time interactive information corresponding to each portrait correction direction includes:
According to each portrait correction direction of the interactive analysis frame, analyzing and processing the direction degree of the real-time interactive information to obtain a preliminary vector between the real-time interactive information and each portrait correction direction;
Analyzing and processing mutual rejection degree of the preliminary vectors between the real-time interaction information and each portrait correction direction to obtain rejection parameters between the preliminary vectors; the rejection parameters are used for describing mutual exclusivity among the pointing degrees of the image correction orientations fed back by the preliminary vectors;
And carrying out vector adjustment processing on each preliminary vector based on rejection parameters among the preliminary vectors to obtain each correction vector corresponding to the minimum rejection parameter.
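A minimal sketch of the vector adjustment just described, assuming the rejection parameter between two preliminary vectors is their product and that the adjustment proceeds by gradient descent on the total rejection; both assumptions are invented for illustration:

```python
def total_rejection(vectors, exclusive_pairs):
    """Sum the mutual-exclusion penalty over each pair of portrait correction
    directions flagged as exclusive (both cannot be strongly pointed to at once)."""
    return sum(vectors[i] * vectors[j] for i, j in exclusive_pairs)

def adjust(vectors, exclusive_pairs, steps=200, lr=0.05):
    """Shrink the preliminary vectors along the rejection gradient until the
    mutual-exclusion penalty is locally minimal; values are clamped to [0, 1]."""
    v = list(vectors)
    for _ in range(steps):
        grad = [0.0] * len(v)
        for i, j in exclusive_pairs:
            grad[i] += v[j]
            grad[j] += v[i]
        v = [min(1.0, max(0.0, x - lr * g)) for x, g in zip(v, grad)]
    return v

prelim = [0.9, 0.8, 0.3]   # tendencies toward three portrait correction directions
pairs = [(0, 1)]           # directions 0 and 1 are mutually exclusive
corrected = adjust(prelim, pairs)
print(total_rejection(corrected, pairs) < total_rejection(prelim, pairs))  # True
```

Directions not involved in any exclusive pair (index 2 here) keep their preliminary value, matching the idea that only mutually exclusive tendencies are traded off.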
Preferably, the step of correcting the interactive image based on the correction vectors of the real-time interaction information corresponding to each portrait correction direction includes:
taking each object image interval fed back by the interactive image as a reference interval;
Generating corresponding target sections according to the image correction directions by taking the reference section as a reference; the target interval is used for describing the object image interval when the correction vector fully points to the portrait correction pointing;
And performing conversion processing on each target section according to each correction vector to obtain a correction section corresponding to each correction vector, performing intersection analysis on the reference section and each correction section to obtain an intersection section, and taking the intersection section as the interaction image after correction processing.
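The interval correction and intersection above can be illustrated as follows; the linear interpolation used for the "conversion processing" (moving the reference interval toward the target in proportion to the correction vector) is an assumption:

```python
def shift_interval(ref, target, weight):
    """Conversion processing: move the reference interval toward the target
    interval in proportion to the correction vector's weight in [0, 1]."""
    lo = ref[0] + weight * (target[0] - ref[0])
    hi = ref[1] + weight * (target[1] - ref[1])
    return (lo, hi)

def intersect(intervals):
    """Intersection of the reference interval with every correction interval;
    returns None if the intervals do not overlap."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return (lo, hi) if lo <= hi else None

reference = (0.2, 0.8)   # object image interval fed back by the current portrait
# hypothetical (target interval, correction weight) per portrait correction direction
targets = [((0.5, 1.0), 0.5), ((0.0, 0.6), 0.25)]
corrections = [shift_interval(reference, t, w) for t, w in targets]
revised = intersect([reference] + corrections)
print(revised)
```

A weight of 1.0 would reproduce the fully-pointed target interval, and a weight of 0.0 would leave the reference interval unchanged, matching the description of the correction vector as a degree of tendency.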
In a second aspect, the present invention provides an interaction device based on artificial intelligence, configured to implement an interaction method based on artificial intelligence according to any one of the first aspect, including:
the portrait construction module is used for continuously collecting interaction information of the interaction object and constructing an interaction portrait of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
The portrait analysis module is used for carrying out multidimensional analysis processing on the interactive objects based on the interactive images to obtain recommended interactive interfaces and interactive analysis frames corresponding to the interactive objects;
The interactive correction module is used for acquiring real-time interactive information of the interactive object on the recommended interactive interface, carrying out interactive behavior analysis processing on the real-time interactive information according to the interactive analysis frame so as to correct the interactive image of the interactive object, and carrying out adjustment processing on the recommended interactive interface based on the corrected interactive image.
In a third aspect, the present invention provides a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing an artificial intelligence based interaction method of any of the first aspects when the computer program is executed.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform an artificial intelligence based interaction method according to any of the first aspects.
The invention provides an interaction method based on artificial intelligence, which has the following beneficial effects:
According to the invention, the interaction information of the user is continuously collected, and the user's interaction portrait is constructed from this information using a pre-trained model. Analysis based on the portrait generates a recommended interaction interface and an analysis framework; user interaction data are captured in real time and analyzed through the framework, the user portrait is corrected according to the analysis result, and the recommended interaction interface is adjusted to match the updated portrait. Through real-time updating the system can provide a more personalized user experience, the dynamically adjusted interface lets the user interact with the system more efficiently, and the prior-art problem that users unfamiliar with intelligent terminal products find them difficult to use efficiently is solved.
Drawings
FIG. 1 is a schematic diagram of steps of an interaction method based on artificial intelligence according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an interaction device based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The implementation of the present invention will be described in detail below with reference to specific embodiments.
Referring to fig. 1 and 2, a preferred embodiment of the present invention is provided.
In a first aspect, the present invention provides an artificial intelligence based interaction method, comprising:
S1: continuously collecting interaction information of an interaction object, and constructing an interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
S2: performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object;
S3: and acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame so as to correct the interaction image of the interaction object, and adjusting the recommended interaction interface based on the corrected interaction image.
Specifically, in step S1 of the embodiment provided by the present invention, the interaction behavior of the user on the interaction terminal, such as clicking, scrolling, browsing, and searching, is monitored and recorded in real time; the monitored items also include the content of the user's specific interactions and the number, frequency, and order of the various interaction behaviors.
More specifically, the interactive terminal may be a personal computer, a smart phone, or other smart terminal products capable of executing software programs.
More specifically, log systems, event tracking, and other monitoring tools are used to collect the data, and useful information is then extracted from the raw interaction data as features, which may include user behavior patterns, preferences, and the like.
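As a loose illustration of this collection-and-extraction step, the following sketch binds type and time tags to each captured behavior and derives simple features; the event schema and the two features chosen are assumptions, not part of the patent:

```python
import time
from collections import Counter

def record_event(log, event_type, payload):
    """Bind a type tag and a time tag to each captured interaction behavior."""
    log.append({"type": event_type, "time": time.time(), "payload": payload})

def extract_features(log):
    """Turn raw interaction records into simple features: per-type counts and
    overall event rate (events per second of observed activity)."""
    counts = Counter(e["type"] for e in log)
    span = (log[-1]["time"] - log[0]["time"]) if len(log) > 1 else 1.0
    return {"counts": dict(counts), "rate": len(log) / max(span, 1e-9)}

log = []
record_event(log, "click", {"target": "settings"})
record_event(log, "scroll", {"delta": -120})
record_event(log, "click", {"target": "camera"})
features = extract_features(log)
print(features["counts"])   # {'click': 2, 'scroll': 1}
```

A production system would persist such records through its logging pipeline; the structure here only shows how type and time marks stay bound to each behavior, as step S11 requires.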
More specifically, a machine learning model is trained using the historical data and the extracted features to construct a user representation model, real-time interaction data is input into the trained user representation model, and a real-time user interaction representation is generated.
It can be appreciated that by continuously collecting user interaction information, the intelligent model can understand the user's behavior and preferences more accurately, which improves the accuracy of the user portrait; a refined user portrait helps the system provide more personalized services and content.
Specifically, in step S2 of the embodiment provided by the present invention, multi-angle analysis is performed on the features in the interactive image, and the historical data and the real-time interactive data of the user are combined to predict the possible future demands and behaviors of the user.
More specifically, according to the characteristics in the interactive image and the prediction of the user demand, a personalized interactive interface is designed, wherein the interactive interface is a port for setting functional links, so that the user can directly use various functions of the interactive terminal through the ports without searching the functions through the interface in the original design of the interactive terminal.
More specifically, a parsing framework is developed for the recommended interactive interface so that the user's interaction behavior can be analyzed in real time and an increasingly accurate user portrait can be identified from the user's real-time interaction information on the recommended interface; the recommended interactive interface and the interaction analysis framework are continuously optimized according to user feedback and interaction data.
Specifically, in step S3 of the embodiment provided by the present invention, behavior data of a user on a recommended interactive interface is obtained in real time, including browsing and using the recommended interactive interface by the user.
More specifically, the real-time data are analyzed using the interactive parsing framework to identify the user's behavior patterns and intentions and to process and interpret the interaction behavior; the existing interactive portrait is then revised and updated according to the user's real-time interaction information, ensuring that the portrait reflects the latest user preferences and behaviors.
More specifically, according to the revised interactive portrait, the recommended interactive interface is dynamically adjusted, such as changing the ordering of the recommended content or adjusting the layout of UI elements. A user feedback mechanism, such as scoring, commenting, or direct feedback buttons, can also be integrated into the interface so that users can comment directly on the recommended content or interface design; this feedback is used to further refine the user portrait and the interface.
More specifically, iterative optimization is performed according to the monitoring and analysis results: the interaction analysis framework and the recommended interactive interface are continuously adjusted, and a rapid-iteration development flow is maintained to ensure the system remains responsive and adaptable.
It can be appreciated that by monitoring and updating the user portrait in real time, the system can predict user needs more accurately and provide a more personalized interactive experience; a portrait updated in real time reflects the user's current preferences more faithfully, thereby improving the accuracy of the recommendation system.
The invention provides an interaction method based on artificial intelligence, which has the following beneficial effects:
According to the invention, the interaction information of the user is continuously collected, and the user's interaction portrait is constructed from this information using a pre-trained model. Analysis based on the portrait generates a recommended interaction interface and an analysis framework; user interaction data are captured in real time and analyzed through the framework, the user portrait is corrected according to the analysis result, and the recommended interaction interface is adjusted to match the updated portrait. Through real-time updating the system can provide a more personalized user experience, the dynamically adjusted interface lets the user interact with the system more efficiently, and the prior-art problem that users unfamiliar with intelligent terminal products find them difficult to use efficiently is solved.
Preferably, the step of continuously collecting the interaction information of the interaction object and constructing the interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model comprises the following steps:
S11: collecting interaction behaviors of the interaction objects, recording type marks and time marks corresponding to the interaction behaviors, and binding the type marks and the time marks with the interaction behaviors to obtain interaction information of the interaction objects;
S12: performing object feature extraction processing of an interactive object on the interactive information according to a pre-trained user portrait intelligent model so as to obtain a plurality of object features of the interactive object;
S13: performing object tracing processing on each object feature to obtain the object vector group to which each object feature points for the interactive object; the object vector group comprises a plurality of object image intervals to which the object feature points for the interactive object and an interval certainty factor corresponding to each object image interval, wherein an object image interval is used for describing one interactive image of the interactive object, and the interval certainty factor is used for describing how likely the interactive object is to correspond to that object image interval;
S14: carrying out integrated analysis processing on each object vector group to obtain the overall certainty factor of each object image interval; the overall certainty factor is a superposition result of the interval certainty factor of each object vector group in the object image interval;
S15: and judging the overall certainty factor of each object image section according to the certainty standard so as to exclude the object image sections which do not meet the certainty standard, and taking the object image sections which meet the certainty standard as the interactive image of the interactive object.
Specifically, the user's behavior, including clicking and browsing, is monitored and recorded; each captured behavior is marked with a time stamp and a type label so that every behavior has a definite time and type context, and the time stamp and type label are bound to the corresponding user behavior to form structured interaction data.
More specifically, the pre-trained user portrait intelligent model is used to analyze the interaction data and extract the object features of the user, which may include the user's interests, preferences, behavior patterns and the like. Each feature is then traced to its source to determine its contribution to the user portrait, forming an object vector group: each object feature corresponds to one or more portrait intervals and certainty factors, where a portrait interval describes the user's tendency in a certain aspect and the certainty factor represents the probability of that tendency.
More specifically, all the object vector groups are integrated, and the certainty factors belonging to the same portrait interval are superposed to form the overall certainty factor. This step is the key to synthesizing the user's interactive portrait and aims to form a comprehensive, multi-dimensional representation of the user's characteristics.
More specifically, the overall certainty factor is evaluated against the set certainty standard, and the portrait intervals which do not meet the standard are eliminated, ensuring that the final interactive portrait only includes features with high certainty. All portrait intervals which meet the certainty standard are integrated into the user's interactive portrait; this portrait represents the behavior patterns and preferences of the user and serves subsequent personalized recommendation and interaction interface adjustment.
It can be understood that, through detailed behavior tracking and feature extraction, the constructed user portrait is more comprehensive and reflects the real demands and preferences of the user more accurately. An accurate user portrait greatly improves the relevance of the recommendation system, thereby improving user satisfaction and engagement, and allows the interaction interface to be adjusted in real time according to the user portrait, providing a more personalized and attractive user experience.
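The superposition and thresholding of S14-S15 can be sketched as follows; the portrait interval names, the certainty values, and the certainty standard of 0.8 are invented for illustration:

```python
from collections import defaultdict

# Each object feature traces to an object vector group:
# {portrait_interval: interval_certainty}. Values are illustrative.
object_vector_groups = [
    {"tech_enthusiast": 0.6, "bargain_hunter": 0.2},
    {"tech_enthusiast": 0.5, "night_owl": 0.4},
    {"bargain_hunter": 0.1},
]

# S14: superpose interval certainty factors into an overall certainty factor.
overall_certainty = defaultdict(float)
for group in object_vector_groups:
    for interval, certainty in group.items():
        overall_certainty[interval] += certainty

# S15: keep only the intervals that meet the certainty standard.
CERTAINTY_STANDARD = 0.8
interactive_portrait = {
    interval for interval, c in overall_certainty.items()
    if c >= CERTAINTY_STANDARD
}
```

With these sample values only "tech_enthusiast" (0.6 + 0.5 = 1.1) clears the standard, so the interactive portrait keeps that interval and discards the low-certainty ones.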
Preferably, the step of pre-training the user portrait intelligent model comprises:
S121: acquiring a plurality of sets of training data; the training data comprises interaction information data and object feature data, wherein the interaction information data describes the interaction information of an interaction object, and the object feature data describes the object features of the interaction object;
S122: constructing an input layer, a convolution layer, three fully connected layers and an output layer;
S123: substituting each set of the training data into the input layer;
S124: the input layer receives each set of the collected training data and transmits it to the convolution layer, and the convolution layer performs feature extraction on each set of training data to obtain the interactive mapping features of each set of training data; an interactive mapping feature describes the mapping relation between the interaction information data and the object feature data in the training data, and the mapping relation is used for mapping interaction information to its corresponding object features;
S125: the three fully connected layers perform successive vector flattening processing on the interactive mapping features extracted by the convolution layer, so as to flatten the interactive mapping features into one-dimensional vector features; the one-dimensional vector features provide a basic representation of the various interactive mapping features;
S126: the output layer outputs the one-dimensional vector features flattened by the fully connected layers.
Specifically, multiple sets of interaction information data and object feature data are prepared; these data should represent users' interaction behaviors and their corresponding features.
More specifically, a neural network architecture comprising an input layer, a convolution layer, three fully connected layers and an output layer is designed, where each layer has a specific function and connection mode. The training data is fed to the input layer, where it needs certain preprocessing, such as normalization.
More specifically, the convolution layer performs feature extraction on the input data to obtain the interactive mapping features, using its filters to identify local features and patterns in the data. The features extracted by the convolution layer are further abstracted and integrated by the three fully connected layers, which map and flatten the high-dimensional features into a one-dimensional vector, facilitating processing by the output layer.
More specifically, the output layer outputs the one-dimensional vector features flattened by the fully connected layers; these feature vectors are used for subsequent interactive portrait construction or other related tasks.
It can be understood that the convolution layer can effectively extract local features and patterns from the interaction data, improving the model's understanding of user behaviors, while the fully connected layers convert the high-dimensional features into one-dimensional vectors so that the features are comprehensively represented in the final user portrait. Trained on a large amount of data, the model generalizes better to unseen data, improving the accuracy of prediction and classification, and can construct more detailed and accurate user portraits from complex data relations. Compared with traditional feature engineering, this automated feature extraction obtains more accurate feature expressions more quickly.
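The forward pass of the described architecture (input, convolution, three fully connected layers, one-dimensional output) can be sketched in plain Python; all layer sizes, the filter width, and the random weights are assumptions for illustration, and a real implementation would of course train these weights:

```python
import random

random.seed(0)

def rand_vec(n):
    return [random.uniform(-1, 1) for _ in range(n)]

def rand_mat(rows, cols):
    return [rand_vec(cols) for _ in range(rows)]

def matvec(W, x):
    """Dense (fully connected) layer without bias: W @ x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(xs):
    return [max(0.0, v) for v in xs]

def conv1d(x, kernels):
    """Valid 1-D convolution: each kernel slides over the input sequence."""
    k = len(kernels[0])
    out = []
    for kern in kernels:
        out.extend(sum(kern[j] * x[i + j] for j in range(k))
                   for i in range(len(x) - k + 1))
    return out

# Assumed sizes: a 16-step interaction sequence and 4 filters of width 3.
x = rand_vec(16)                              # input layer
features = relu(conv1d(x, rand_mat(4, 3)))    # conv layer: 4 * 14 = 56 values

# Three fully connected layers progressively flatten the features.
h = relu(matvec(rand_mat(32, 56), features))
h = relu(matvec(rand_mat(16, 32), h))
output = matvec(rand_mat(8, 16), h)           # output layer: 1-D vector feature
```

The final `output` is the one-dimensional vector feature that S126 emits for portrait construction.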
Preferably, the step of performing multidimensional analysis processing on the interaction object based on the interactive portrait to obtain the recommended interaction interface and the interactive parsing framework corresponding to the interaction object comprises:
S21: performing object recommendation function analysis processing on each object portrait interval of the interactive portrait respectively, so as to obtain the recommended functions corresponding to each object portrait interval and the function attribution label corresponding to each recommended function, and generating a function priority index for each recommended function according to the overall certainty factor of the corresponding object portrait interval;
S22: listing the recommended functions according to their function attribution labels and function priority indexes to obtain a recommended function list; the recommended function list has a parallel structure and a nested structure, and each recommended function is arranged in the recommended function list in the form of the parallel structure or the nested structure;
S23: generating a corresponding function link port for each recommended function; the function link port enables the interaction object to interact with the recommended function;
S24: performing tabulation processing on each function link port according to the recommended function list to obtain the recommended interaction interface corresponding to the interaction object;
S25: performing predictive analysis processing on interaction behaviors based on the recommended interaction interface to obtain a plurality of possible interaction behaviors of the interaction object on the recommended interaction interface, and performing expansion analysis processing on the interactive portrait according to the possible interaction behaviors to obtain a plurality of portrait correction directions of the interactive portrait; wherein a portrait correction direction describes a direction in which the interactive portrait of the interaction object may be corrected;
S26: taking the various portrait correction directions of the interactive portrait together as the interactive parsing framework of the interactive portrait.
Specifically, the different object portrait intervals in the user portrait are analyzed, corresponding functions are recommended for each interval, function attribution labels are allocated, and a function priority index is generated based on the certainty factor of each object portrait interval.
More specifically, the recommended functions are ordered and listed according to their attribution labels and priority indexes to form a recommended function list with a parallel structure and a nested structure.
More specifically, a corresponding function link port is generated for each recommended function. Interactive functions can be reached directly through the function link ports, which avoids the situation in traditional designs where the user has to locate functions according to the fixed layout of the interaction terminal; the recommended interaction interface is then composed by listing the function link ports according to the recommended function list.
More specifically, by predicting and analyzing interaction behaviors on the recommended interface, the possible interaction behaviors of the user are deduced; based on these predictions the user portrait is further analyzed to derive correction guidance, and the portrait correction directions are integrated to form the interactive parsing framework, which guides the correction direction of the user's interactive portrait.
It can be understood that a customized interaction interface is created for the user portrait by recommending personalized functions according to the user's specific portrait, which improves the intuitiveness and usability of the interface and enhances user experience.
Meanwhile, the interactive parsing framework corresponding to the recommended interaction interface enables further analysis of and reaction to the user's interaction behaviors on the recommended interaction interface, provides a direction for continuous optimization of the user portrait, and keeps the user portrait dynamically updated as user behavior changes.
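The listing of S21-S24 can be sketched as follows; the function names, attribution labels, and priority values are invented, and the nested structure is modeled simply as per-label sub-lists ordered by priority:

```python
# Recommended functions carry an attribution label and a priority index
# derived from the overall certainty factor of the portrait interval that
# produced them. Functions sharing a label nest under it; the labels
# themselves sit in parallel, ordered by their best function's priority.
recommended_functions = [
    {"name": "deal_alerts",   "label": "shopping", "priority": 0.6},
    {"name": "gadget_news",   "label": "tech",     "priority": 0.9},
    {"name": "spec_compare",  "label": "tech",     "priority": 0.7},
    {"name": "coupon_center", "label": "shopping", "priority": 0.5},
]

def build_function_list(functions):
    groups = {}
    for f in functions:
        groups.setdefault(f["label"], []).append(f)
    # Nested structure: each label's functions sorted by priority (descending).
    for fs in groups.values():
        fs.sort(key=lambda f: -f["priority"])
    # Parallel structure: labels ordered by their top function's priority.
    return sorted(groups.items(), key=lambda kv: -kv[1][0]["priority"])

function_list = build_function_list(recommended_functions)
# S23-S24: one link port per function, tabulated in list order.
link_ports = [f["name"] for _, fs in function_list for f in fs]
```

The flattened `link_ports` ordering is what the tabulation step would lay out on the recommended interaction interface.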
Preferably, the step of acquiring the real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interactive parsing framework so as to correct the interactive portrait of the interaction object, and adjusting the recommended interaction interface based on the corrected interactive portrait comprises:
S31: acquiring real-time interaction information of the interaction object on the recommended interaction interface;
S32: performing interaction behavior analysis processing on the real-time interaction information according to the interactive parsing framework to obtain the correction vector of the real-time interaction information corresponding to each portrait correction direction; the correction vector describes the degree to which the real-time interaction information tends toward each portrait correction direction;
S33: correcting the interactive portrait based on the correction vectors of the real-time interaction information corresponding to the portrait correction directions;
S34: generating a plurality of newly added recommended functions based on the corrected interactive portrait, and performing list position analysis processing on the newly added recommended functions based on the recommended function list to obtain the setting position of each newly added recommended function in the recommended function list;
S35: generating a corresponding newly added link port for each newly added recommended function, and setting the newly added link port at the corresponding position in the recommended interaction interface according to the setting position of the newly added recommended function in the recommended function list.
Specifically, the real-time interaction data of the user on the recommended interaction interface is collected, and the real-time interaction information is analyzed using the interactive parsing framework to determine the degree to which it tends toward each portrait correction direction, namely the correction vector.
More specifically, the user's interactive portrait is updated and corrected in real time according to the correction vectors; new recommended functions are generated according to the corrected user portrait, their proper positions in the existing recommended function list are determined by analysis, corresponding newly added link ports are created for them, and the ports are arranged at the corresponding positions in the recommended interaction interface according to their positions in the function list.
It can be understood that, through real-time interaction data analysis, the user portrait can be dynamically updated to reflect the user's latest preferences and behaviors, and the recommended interaction interface can be adjusted according to the dynamically updated portrait to better meet the user's personalized demands. By correcting the user portrait, the system can predict user demands more accurately and thus provide better-fitting function recommendations; optimizing the layout of the recommended interaction interface according to the importance of the newly added functions and the user's demands improves the user's operating convenience; and through continuous interaction analysis and interface adjustment, the system aims to provide a smoother and more intuitive user experience.
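The placement of a newly added link port (S34-S35) can be sketched as an ordered insertion into the existing priority-ordered list; the port names and priority values are invented for illustration:

```python
import bisect

# Current interface: link ports in descending-priority order.
ports = ["gadget_news", "spec_compare", "deal_alerts"]
priorities = [0.9, 0.7, 0.6]

def insert_new_function(name, priority):
    """S34: find the setting position; S35: place the new link port there."""
    # bisect works on ascending sequences, so search the negated priorities.
    pos = bisect.bisect_left([-p for p in priorities], -priority)
    priorities.insert(pos, priority)
    ports.insert(pos, name)
    return pos

pos = insert_new_function("price_tracker", 0.8)
```

The new port lands between the existing entries whose priorities bracket it, so the adjusted interface keeps its priority ordering without a full rebuild.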
Preferably, the step of performing interaction behavior analysis processing on the real-time interaction information according to the interactive parsing framework to obtain the correction vector of the real-time interaction information corresponding to each portrait correction direction comprises:
S321: analyzing the degree to which the real-time interaction information points toward each portrait correction direction of the interactive parsing framework, so as to obtain a preliminary vector between the real-time interaction information and each portrait correction direction;
S322: performing mutual rejection degree analysis processing on the preliminary vectors between the real-time interaction information and the portrait correction directions to obtain the rejection parameters between the preliminary vectors; a rejection parameter describes the mutual exclusivity between the pointing degrees toward the portrait correction directions fed back by the preliminary vectors;
S323: performing vector adjustment processing on each preliminary vector based on the rejection parameters between the preliminary vectors, so as to obtain the correction vectors corresponding to the minimum rejection parameters.
Specifically, for each portrait correction direction defined in the interactive parsing framework, the real-time interaction information is analyzed and preliminary vectors characterizing the degrees of these directions are generated.
More specifically, the rejection between different preliminary vectors is analyzed, that is, whether the user behavior tendencies represented by two or more vectors are mutually exclusive or conflicting, and a rejection parameter is calculated which describes the degree of rejection between the user behavior tendencies represented by the respective preliminary vectors.
More specifically, the preliminary vectors are adjusted according to the rejection parameters to reduce the mutual exclusivity between the vectors, resulting in final correction vectors with minimized rejection parameters.
It can be appreciated that, by considering the rejection between behavior tendencies, more accurate user behavior correction vectors can be generated, improving the accuracy of the user portrait. The analysis of rejection parameters and the adjustment of the vectors help improve the decision efficiency of the recommendation system and reduce erroneous recommendations, and the adjusted correction vectors better reflect the user's real preferences, thereby providing a more personalized user experience.
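One possible formalization of the rejection analysis and vector adjustment in S321-S323; the direction names, the pairwise-product rejection measure, and the damping scheme are all assumptions, since the text does not fix a concrete formula:

```python
# Preliminary vectors: scalar tendency toward each portrait correction
# direction. "more_tech" and "less_tech" are assumed mutually exclusive.
preliminary = {"more_tech": 0.8, "less_tech": 0.5, "more_deals": 0.6}
exclusive_pairs = [("more_tech", "less_tech")]

def total_rejection(vectors):
    """S322: rejection parameter as the product of conflicting tendencies."""
    return sum(vectors[a] * vectors[b] for a, b in exclusive_pairs)

def adjust(vectors, steps=10, damping=0.5):
    """S323: iteratively damp the weaker side of each conflicting pair,
    keeping the adjustment only when it lowers the total rejection."""
    best = dict(vectors)
    for _ in range(steps):
        trial = dict(best)
        for a, b in exclusive_pairs:
            weaker = a if trial[a] < trial[b] else b
            trial[weaker] *= damping
        if total_rejection(trial) < total_rejection(best):
            best = trial
    return best

correction_vectors = adjust(preliminary)
```

The dominant tendency survives intact while its contradicting counterpart is driven toward zero, giving correction vectors with minimal mutual exclusivity.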
Preferably, the step of correcting the interactive portrait based on the correction vectors of the real-time interaction information corresponding to the portrait correction directions comprises:
S331: taking each object portrait interval fed back by the interactive portrait as a reference interval;
S332: generating corresponding target intervals according to the portrait correction directions with the reference interval as a basis; a target interval describes the object portrait interval that would result if the correction vector pointed fully at its portrait correction direction;
S333: performing conversion processing on each target interval according to the corresponding correction vector to obtain the correction interval corresponding to each correction vector, performing intersection analysis on the reference interval and the correction intervals to obtain an intersection interval, and taking the intersection interval as the corrected interactive portrait.
Specifically, the existing object portrait intervals in the interactive portrait are used as reference intervals; these intervals represent the current portrait state of the user.
More specifically, depending on the portrait correction directions, a series of target intervals is generated with the reference interval as the starting point; each target interval represents the ideal state the user portrait would reach if the correction vector pointed fully at the corresponding portrait correction direction.
More specifically, each target interval is transformed using its correction vector to obtain a correction interval; this involves scaling, shifting or other transformations that reflect the user portrait changes under the influence of the correction vector.
More specifically, intersection analysis is performed on the reference interval and all the correction intervals to determine their common part, namely the intersection interval. This interval represents the user portrait after all correction factors have been considered, and is taken as the corrected interactive portrait, which more accurately reflects the user's current interaction situation and preferences.
It can be understood that, through this process, the user's interactive portrait is continuously updated to reflect the user's latest behaviors and preferences in real time, and a more accurate user portrait improves the matching quality of the recommendation system, thereby improving the relevance of recommended content and user satisfaction.
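A numeric sketch of S331-S333, modeling portrait intervals as 1-D numeric ranges; the interval bounds, direction names, and the linear interpolation by the correction vector's magnitude are illustrative assumptions:

```python
def interpolate(reference, target, t):
    """S333 conversion: move the reference toward the target by fraction t,
    where t in [0, 1] is the correction vector's magnitude."""
    lo = reference[0] + t * (target[0] - reference[0])
    hi = reference[1] + t * (target[1] - reference[1])
    return (lo, hi)

def intersect(intervals):
    """Common part of all intervals, or None if they do not overlap."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return (lo, hi) if lo <= hi else None

# S331: current object portrait interval as the reference interval.
reference = (0.2, 0.8)
# S332: target intervals, one per portrait correction direction.
targets = {"shift_up": (0.5, 1.0), "narrow": (0.3, 0.6)}
# Correction vector magnitudes per direction (assumed values).
correction = {"shift_up": 0.5, "narrow": 0.4}

corrected = [interpolate(reference, targets[d], correction[d]) for d in targets]
portrait = intersect([reference] + corrected)   # corrected interactive portrait
```

The intersection keeps only the portion of the reference interval consistent with every correction influence, which is what S333 takes as the corrected portrait.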
Referring to fig. 2, in a second aspect, the present invention provides an interaction device based on artificial intelligence, for implementing an interaction method based on artificial intelligence according to any one of the first aspect, including:
the portrait construction module is used for continuously collecting interaction information of the interaction object and constructing an interactive portrait of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
the portrait analysis module is used for performing multidimensional analysis processing on the interaction object based on the interactive portrait to obtain the recommended interaction interface and the interactive parsing framework corresponding to the interaction object;
the interaction correction module is used for acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interactive parsing framework so as to correct the interactive portrait of the interaction object, and adjusting the recommended interaction interface based on the corrected interactive portrait.
In this embodiment, for specific implementation of each module in the above embodiment of the apparatus, please refer to the description in the above embodiment of the method, and no further description is given here.
In a third aspect, the present invention provides a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing an artificial intelligence based interaction method of any of the first aspects when the computer program is executed.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform an artificial intelligence based interaction method according to any of the first aspects.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (7)

1. An artificial intelligence based interaction method, comprising:
Continuously collecting interaction information of an interaction object, and constructing an interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
Performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object;
Acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame so as to correct an interaction image of the interaction object, and adjusting the recommended interaction interface based on the corrected interaction image;
The method for continuously collecting the interaction information of the interaction object and constructing the interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model comprises the following steps:
collecting interaction behaviors of the interaction objects, recording type marks and time marks corresponding to the interaction behaviors, and binding the type marks and the time marks with the interaction behaviors to obtain interaction information of the interaction objects;
performing object feature extraction processing of an interactive object on the interactive information according to a pre-trained user portrait intelligent model so as to obtain a plurality of object features of the interactive object;
Performing object tracing processing on each object feature to obtain an object vector group of the interactive object pointed by each object feature; the object vector group comprises a plurality of object image intervals of which object features point to the interactive object and interval confident degrees corresponding to the object image intervals, wherein the object image intervals are used for describing one interactive image of the interactive object, and the interval confident degrees are used for describing the possibility degree of the interactive object corresponding to the object image intervals;
carrying out integrated analysis processing on each object vector group to obtain the overall certainty factor of each object image interval; the overall certainty factor is a superposition result of the interval certainty factor of each object vector group in the object image interval;
Judging the overall certainty factor of each object image section according to a certainty standard so as to exclude the object image sections which do not meet the certainty standard, and taking the object image sections which meet the certainty standard as the interactive image of the interactive object;
the step of carrying out multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object comprises the following steps:
Analyzing and processing object recommendation functions of each object image section of the interactive image respectively to obtain recommendation functions of each object image section corresponding to the interactive image and function attribution labels corresponding to each recommendation function, and generating corresponding function priority indexes of the recommendation functions according to the overall certainty factor of each object image section;
According to the function attribution labels and the function priority indexes of the recommended functions, listing the recommended functions to obtain a recommended function list; the recommending function list is provided with a parallel structure and a nested structure, and each recommending function is arranged in the recommending function list in the form of the parallel structure or the nested structure;
Generating corresponding function link ports according to the recommended functions; the function link port is used for enabling the interaction object to realize function interaction with the recommendation function;
According to the recommended function list, performing tabulation processing on each function link port to obtain a recommended interaction interface corresponding to the interaction object;
Performing predictive analysis processing on the interaction behavior based on the recommended interaction interface to obtain a plurality of possible interaction behaviors of the interaction object on the recommended interaction interface, and performing expansion analysis processing on the interaction portrait according to the plurality of possible interaction behaviors to obtain a plurality of portrait modification orientations of the interaction portrait; wherein the portrait modification points to a modification direction of an interactive portrait for describing the interactive object;
taking the various portrait modification orientations of the interactive portrait together as the interactive analysis frame of the interactive portrait;
The method comprises the steps of obtaining real-time interaction information of the interaction object on the recommended interaction interface, carrying out interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame so as to carry out correction processing on an interaction image of the interaction object, and carrying out adjustment processing on the recommended interaction interface based on the corrected interaction image, wherein the steps comprise:
acquiring real-time interaction information of the interaction object on the recommended interaction interface;
Performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis frame to obtain correction vectors of the real-time interactive information corresponding to the portrait correction directions; the correction vector is used for describing the tendency degree of the real-time interaction information in each portrait correction direction;
based on the correction vectors of the real-time interaction information corresponding to the image correction directions, correcting the interaction images;
Generating a plurality of new recommended functions based on the revised interactive image, and analyzing and processing list positions of the new recommended functions based on the recommended function list to obtain setting positions of the new recommended functions in the recommended function list;
generating corresponding newly-added link ports according to the newly-added recommending functions, and setting the newly-added link ports corresponding to the newly-added recommending functions at corresponding positions in the recommending interactive interface according to the setting positions of the newly-added recommending functions in the recommending function list.
2. The artificial intelligence based interaction method of claim 1, wherein the step of pre-training the user portrayal intelligence model comprises:
acquiring a plurality of sets of training data; the training data comprises interactive information data and object feature data, wherein the interactive information data is used for describing interactive information of an interactive object, and the object feature data is used for describing object features of the interactive object;
Constructing an input layer, a convolution layer, three full connection layers and an output layer;
Substituting each set of the training data into the input layer;
The input layer receives the collected training data of each group and transmits the training data of each group to the convolution layer, and the convolution layer is used for carrying out feature extraction on the training data of each group so as to obtain interactive mapping features of the training data of each group; the interactive mapping feature is used for describing a mapping relation between the interactive information data and the object feature data in the training data, and the mapping relation is used for carrying out mapping processing on the interactive information so as to obtain the object feature corresponding to the interactive information;
The three full connection layers are used for carrying out continuous vector flattening processing on the various interactive mapping features extracted by the convolution layer so as to flatten the various interactive mapping features into one-dimensional vector features; the one-dimensional vector features are used for carrying out basic graphic expression on various interactive mapping features;
the output layer is used for outputting the unfolded one-dimensional vector features.
3. The interactive method according to claim 1, wherein the step of performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis framework to obtain correction vectors of the real-time interactive information corresponding to the portrait correction orientations comprises:
According to each portrait correction direction of the interactive analysis frame, analyzing and processing the direction degree of the real-time interactive information to obtain a preliminary vector between the real-time interactive information and each portrait correction direction;
Analyzing and processing mutual rejection degree of the preliminary vectors between the real-time interaction information and each portrait correction direction to obtain rejection parameters between the preliminary vectors; the rejection parameters are used for describing mutual exclusivity among the pointing degrees of the image correction orientations fed back by the preliminary vectors;
And carrying out vector adjustment processing on each preliminary vector based on rejection parameters among the preliminary vectors to obtain each correction vector corresponding to the minimum rejection parameter.
4. The artificial intelligence based interactive method of claim 1, wherein the step of modifying the interactive portrait based on the correction vectors of the real-time interactive information corresponding to the portrait correction directions comprises:
taking each object image interval fed back by the interactive image as a reference interval;
Generating corresponding target sections according to the image correction directions by taking the reference section as a reference; the target interval is used for describing the object image interval that results when the correction vector fully points at its portrait correction direction;
And performing conversion processing on each target section according to each correction vector to obtain a correction section corresponding to each correction vector, performing intersection analysis on the reference section and each correction section to obtain an intersection section, and taking the intersection section as the interaction image after correction processing.
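The interval logic of claim 4 can be sketched for a single portrait dimension. All concrete values, the two direction names, and the linear blend used as the "conversion" are illustrative assumptions; the patent does not fix the conversion formula.

```python
# Hypothetical object portrait interval for one portrait dimension
# (e.g. a preference-score range), used as the reference interval.
reference = (0.3, 0.7)

# Target intervals: where the interval would sit if a correction vector
# fully pointed at its direction (values are illustrative).
targets = {"up": (0.5, 0.9), "down": (0.1, 0.5)}
correction = {"up": 0.6, "down": 0.2}   # correction-vector magnitudes in [0, 1]

def convert(ref, target, w):
    """Blend the reference interval toward the target by correction weight w."""
    return (ref[0] + w * (target[0] - ref[0]),
            ref[1] + w * (target[1] - ref[1]))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

correction_intervals = [convert(reference, targets[d], correction[d])
                        for d in targets]

# Intersect the reference interval with every correction interval;
# the surviving intersection interval is the corrected portrait interval.
result = reference
for iv in correction_intervals:
    result = intersect(result, iv)

print(result)
```

With these numbers the corrected interval comes out near (0.42, 0.66): it is pulled upward by the strong "up" correction yet still capped by the weaker "down" correction and the original reference.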
5. An artificial intelligence based interaction device for implementing the artificial intelligence based interaction method according to any one of claims 1 to 4, comprising:
a portrait construction module, used to continuously collect interaction information of an interaction object and construct an interaction portrait of the interaction object from the interaction information through a pre-trained intelligent user portrait model;
a portrait analysis module, used to perform multidimensional analysis on the interaction object based on the interaction portrait, to obtain a recommended interaction interface and an interaction analysis framework corresponding to the interaction object;
an interaction correction module, used to collect real-time interaction information of the interaction object on the recommended interaction interface, perform interaction behavior analysis on the real-time interaction information according to the interaction analysis framework so as to correct the interaction portrait of the interaction object, and adjust the recommended interaction interface based on the corrected interaction portrait.
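The three modules of claim 5 map naturally onto a small pipeline skeleton. Everything below is a hypothetical illustration — the class name, method names, and all placeholder bodies are assumptions, not the patent's implementation.

```python
class AIInteractionDevice:
    """Illustrative skeleton of the three modules in claim 5; the method
    bodies are placeholders, not the patented implementation."""

    def __init__(self, portrait_model):
        # portrait_model stands in for the pre-trained user portrait model.
        self.portrait_model = portrait_model

    # Portrait construction module
    def build_portrait(self, interaction_info):
        return self.portrait_model(interaction_info)

    # Portrait analysis module
    def analyze_portrait(self, portrait):
        recommended_interface = {"layout": "default"}        # placeholder
        analysis_framework = {"directions": ["up", "down"]}  # placeholder
        return recommended_interface, analysis_framework

    # Interaction correction module
    def correct(self, portrait, realtime_info, framework):
        # Toy correction: record the latest real-time event on the portrait.
        return dict(portrait, last_event=realtime_info)


# Minimal usage with a stand-in portrait model that just counts events.
device = AIInteractionDevice(portrait_model=lambda info: {"clicks": len(info)})
portrait = device.build_portrait(["click_a", "click_b"])
ui, framework = device.analyze_portrait(portrait)
corrected = device.correct(portrait, "scroll", framework)
print(corrected)
```

The point of the skeleton is the data flow — portrait in, recommended interface and framework out, then correction feeding back into the interface — rather than any particular model.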
6. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the artificial intelligence based interaction method according to any one of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the artificial intelligence based interaction method according to any one of claims 1 to 4.
CN202410721137.XA 2024-06-05 2024-06-05 Interaction method, device, equipment and storage medium based on artificial intelligence Active CN118312267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410721137.XA CN118312267B (en) 2024-06-05 2024-06-05 Interaction method, device, equipment and storage medium based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410721137.XA CN118312267B (en) 2024-06-05 2024-06-05 Interaction method, device, equipment and storage medium based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN118312267A CN118312267A (en) 2024-07-09
CN118312267B true CN118312267B (en) 2024-08-13

Family

ID=91727641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410721137.XA Active CN118312267B (en) 2024-06-05 2024-06-05 Interaction method, device, equipment and storage medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN118312267B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115248896A (en) * 2022-07-25 2022-10-28 数效(深圳)科技有限公司 User portrait optimization method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10782986B2 (en) * 2018-04-20 2020-09-22 Facebook, Inc. Assisting users with personalized and contextual communication content
CN113781082B (en) * 2020-11-18 2023-04-07 京东城市(北京)数字科技有限公司 Method and device for correcting regional portrait, electronic equipment and readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115248896A (en) * 2022-07-25 2022-10-28 数效(深圳)科技有限公司 User portrait optimization method

Also Published As

Publication number Publication date
CN118312267A (en) 2024-07-09

Similar Documents

Publication Publication Date Title
CN110046304B (en) User recommendation method and device
US20230088171A1 (en) Method and apparatus for training search recommendation model, and method and apparatus for sorting search results
CN107368550A (en) Information acquisition method, device, medium, electronic equipment, server and system
CN116127203B (en) RPA service component recommendation method and system combining page information
CN115269512A (en) Object recommendation method, device and storage medium for realizing IA by combining RPA and AI
Long et al. Multi-task learning for collaborative filtering
CN116821457A (en) Intelligent consultation and public opinion processing system based on multi-mode large model
Wu et al. Deep collaborative filtering based on outer product
US20240281472A1 (en) Interactive interface with generative artificial intelligence
US20140310306A1 (en) System And Method For Pattern Recognition And User Interaction
CN117972231A (en) RPA project recommendation method, storage medium and electronic equipment
CN118312267B (en) Interaction method, device, equipment and storage medium based on artificial intelligence
CN114936279A (en) Unstructured chart data analysis method for collaborative manufacturing enterprise
CN113641900A (en) Information recommendation method and device
Li et al. A Survey of Multimodal Composite Editing and Retrieval
EP4336380A2 (en) A user-centric ranking algorithm for recommending content items
Qu et al. Knowledge enhanced bottom-up affordance grounding for robotic interaction
Wei et al. SpaceEditing: A Latent Space Editing Interface for Integrating Human Knowledge into Deep Neural Networks
CN116226678B (en) Model processing method, device, equipment and storage medium
CN118378166B (en) Garment system behavior data analysis method and system based on artificial intelligence
CN117592622B (en) Robot flow automation-oriented behavior sequence prediction method and system
CN118349664B (en) User portrait optimization method and system based on webpage semantic analysis
Tang et al. A feature-aware long-short interest evolution network for sequential recommendation
Trifan et al. A Combined Finite State Machine and PlantUML Approach to Machine Learning Applications
AU2023214216B2 (en) Framework for machine guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant