CN114357105B - Pre-training method and model fine-tuning method of geographic pre-training model - Google Patents
- Publication number
- CN114357105B (application number CN202210230756.XA)
- Authority
- CN
- China
- Prior art keywords
- training
- model
- node
- geographic
- new
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The disclosure provides a pre-training method and a model fine-tuning method for a geographic pre-training model, relating to artificial intelligence fields such as deep learning and graph structures. The method comprises the following steps: acquiring a sample node sequence, wherein the sample node sequence is generated based on a preset point-of-interest heterogeneous graph and a random walk algorithm, the heterogeneous graph comprises nodes representing points of interest and edges connecting the nodes, each node is named with the place name of its point of interest, and each edge represents an association that exists in the real world between the corresponding nodes; inputting the sample node sequence into an initial geographic pre-training model as a training sample; and controlling the initial geographic pre-training model to train toward a preset training target, and outputting the current geographic pre-training model that reaches the training target as the target geographic pre-training model. By integrating heterogeneous, multi-modal geographic knowledge into the pre-training process, the effect of downstream tasks related to geographic position is improved.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to the field of artificial intelligence technologies such as deep learning and graph structure, and in particular, to a pre-training method for a geographic pre-training model, a model fine-tuning method for a geographic pre-training model, and a corresponding apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
The map domain is special in that its information processing must remain associated with the real world. For example, in a map search engine, when a user inputs a search term (or query term), the position of each candidate Point of Interest (POI) and its distance from the user's current position are very important ranking features.
Text data in the map domain consists mainly of structured POI data, whose information is relatively simple and limited, usually comprising only names, aliases, addresses, and categories. The information in the map domain most strongly tied to the real world is often not expressed as text. Pre-training a model on the text dimension alone therefore cannot fully exploit the spatio-temporal big data of the map domain, and the knowledge captured by the pre-trained model remains only weakly associated with the real world.
Disclosure of Invention
The embodiment of the disclosure provides a pre-training method of a geographic pre-training model, a model fine-tuning method of the geographic pre-training model, and a corresponding device, electronic equipment, a computer-readable storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides a pre-training method for a geographic pre-training model, including: obtaining a sample node sequence, wherein the sample node sequence is generated based on a preset point-of-interest heterogeneous graph and a random walk algorithm, the heterogeneous graph comprises nodes representing points of interest and edges connecting the nodes, each node is named with the place name of its point of interest, and each edge represents an association that exists in the real world between the corresponding nodes; inputting the sample node sequence into an initial geographic pre-training model as a training sample; and controlling the initial geographic pre-training model to train toward a preset training target, and outputting the current geographic pre-training model that reaches the training target as the target geographic pre-training model, wherein the training target includes a sub-target guiding the model to learn, from the training samples, the mapping between the place names of points of interest and preset position codes, each preset position code corresponding to the real-world geographic block in which the corresponding point of interest is located.
In a second aspect, an embodiment of the present disclosure provides a pre-training apparatus for a geographic pre-training model, including: a sample node sequence acquisition unit configured to acquire a sample node sequence, wherein the sample node sequence is generated based on a preset point-of-interest heterogeneous graph and a random walk algorithm, the heterogeneous graph comprises nodes representing points of interest and edges connecting the nodes, each node is named with the place name of its point of interest, and each edge represents an association that exists in the real world between the corresponding nodes; a training sample input unit configured to input the sample node sequence into an initial geographic pre-training model as a training sample; and a geographic pre-training model training unit configured to control the initial geographic pre-training model to train toward a preset training target and to output the current geographic pre-training model that reaches the training target as the target geographic pre-training model, wherein the training target includes a sub-target guiding the model to learn, from the training samples, the mapping between the place names of points of interest and preset position codes, each preset position code corresponding to the real-world geographic block in which the corresponding point of interest is located.
In a third aspect, an embodiment of the present disclosure provides a model fine-tuning method for a geographic pre-training model, including: obtaining a target geographic pre-training model, wherein the target geographic pre-training model is obtained according to the pre-training method of any implementation of the first aspect; acquiring a new function requirement of a map application and determining new training samples corresponding to the new function requirement; and, on the basis of the target geographic pre-training model, generating a new geographic model corresponding to the new function requirement through a model fine-tuning technology and the new training samples.
In a fourth aspect, an embodiment of the present disclosure provides a model fine-tuning apparatus for a geographic pre-training model, including: a target geographic pre-training model obtaining unit configured to obtain a target geographic pre-training model, wherein the target geographic pre-training model is obtained according to the pre-training apparatus of any implementation of the second aspect; a new training sample determination unit configured to acquire a new function requirement of a map application and determine new training samples corresponding to the new function requirement; and a new geographic model generation unit configured to generate, on the basis of the target geographic pre-training model, a new geographic model corresponding to the new function requirement through a model fine-tuning technology and the new training samples.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the pre-training method of a geographic pre-training model described in any implementation of the first aspect or the model fine-tuning method of a geographic pre-training model described in any implementation of the third aspect.
In a sixth aspect, the disclosed embodiments provide a non-transitory computer-readable storage medium storing computer instructions that, when executed, cause a computer to implement the pre-training method of a geographic pre-training model described in any implementation of the first aspect or the model fine-tuning method of a geographic pre-training model described in any implementation of the third aspect.
In a seventh aspect, the disclosed embodiments provide a computer program product comprising a computer program that, when executed by a processor, implements the pre-training method of a geographic pre-training model described in any implementation of the first aspect or the model fine-tuning method of a geographic pre-training model described in any implementation of the third aspect.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture to which the present disclosure may be applied;
FIG. 2 is a flowchart of a pre-training method of a geographic pre-training model according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of spatial knowledge of a point of interest provided by an embodiment of the present disclosure;
fig. 4 is a flowchart of a method for generating a sample node sequence according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a process for generating a point of interest heterogeneous graph according to an embodiment of the present disclosure;
fig. 6 is a schematic process diagram of processing a sample node sequence at different functional layers in a geographic pre-training model according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a training target for learning a mapping relationship between a text and a preset position code according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of a model fine-tuning method of a geographic pre-training model according to an embodiment of the present disclosure;
fig. 9 is a block diagram of a pre-training apparatus of a geographic pre-training model according to an embodiment of the present disclosure;
fig. 10 is a block diagram illustrating a structure of a model fine-tuning device of a geographic pre-training model according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device suitable for executing a pre-training method of a geographic pre-training model and/or a model fine-tuning method of the geographic pre-training model according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information all comply with the relevant laws and regulations and do not violate public order and good customs.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the pre-training and fine-tuning methods of the geographic pre-training models of the present application, and corresponding apparatus, electronic devices, and computer-readable storage media, may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may use terminal devices 101, 102, 103 to interact with a server 105 over a network 104 to receive or send messages or the like. Various applications for realizing information communication between the terminal devices 101, 102, 103 and the server 105, such as a model training application, a model tuning application, a map-related data processing application, etc., may be installed on the terminal devices 101, 102, 103 and the server.
The terminal devices 101, 102, 103 and the server 105 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like; when the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above, and they may be implemented as multiple software or software modules, or may be implemented as a single software or software module, and are not limited in this respect. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server; when the server is software, the server may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not limited herein.
The server 105 may provide various services through various built-in applications. Taking as an example a model training application that provides a pre-training service for a geographic pre-training model, the server 105 may, when running that application, achieve the following effects: first, obtain a sample node sequence generated in advance based on a preset point-of-interest heterogeneous graph and a random walk algorithm, where the heterogeneous graph comprises nodes representing points of interest and edges connecting the nodes, each node is named with the place name of its point of interest, and each edge represents an association that exists in the real world between the corresponding nodes; next, input the sample node sequence into an initial geographic pre-training model as a training sample; finally, control the initial geographic pre-training model to train toward a preset training target and output the current geographic pre-training model that reaches the training target as the target geographic pre-training model, where the training target includes a sub-target guiding the model to learn, from the training samples, the mapping between the place names of points of interest and preset position codes, each preset position code corresponding to the real-world geographic block in which the corresponding point of interest is located.
The target geographic pre-training model obtained through the model training process can be applied to the technical field (such as the map field) related to the geographic position in practice, so that the ever-increasing new functional requirements related to geographic knowledge in the technical fields are better met, and a new geographic model corresponding to the new functional requirements is quickly obtained on the basis of the geographic pre-training model through a model fine-tuning technology.
This process may be implemented by a model fine-tuning class application, and the server 105 may implement the following effects when running the model fine-tuning class application: firstly, acquiring a target geographical pre-training model; then, acquiring a new function requirement of the map application, and determining a new training sample corresponding to the new function requirement; and finally, generating a new geographic model corresponding to the new function requirement through a model fine-tuning technology and a new training sample on the basis of the target geographic pre-training model.
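One common fine-tuning pattern — freezing the pre-trained weights and training only a new task head on the new training samples — can be sketched as follows. This is a toy illustration under stated assumptions: the real target model is a neural network, and `GeoModel`, its fields, and the update rule are illustrative inventions, not the patent's prescribed procedure.

```python
class GeoModel:
    """Toy stand-in for the target geographic pre-training model,
    used only to illustrate the freeze-backbone fine-tuning pattern."""

    def __init__(self):
        self.backbone = {"w": 1.0}  # pre-trained weights (kept frozen)
        self.head = {"w": 0.0}      # new task head (trained for the new function)

    def predict(self, x):
        return self.backbone["w"] * x + self.head["w"]


def fine_tune(model, samples, lr=0.5, epochs=20):
    """Fit only the head on (x, y) samples for the new function
    requirement, leaving the pre-trained backbone untouched."""
    for _ in range(epochs):
        for x, y in samples:
            err = model.predict(x) - y
            model.head["w"] -= lr * err  # gradient step on the head only
    return model


# Adapt the pre-trained model to one new (input, label) pair.
model = fine_tune(GeoModel(), [(1.0, 3.0)])
```

Because the backbone stays fixed, the geographic knowledge acquired during pre-training is preserved while the head specializes to the new requirement.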
Both model training and model fine-tuning occupy considerable computing resources and require strong computing power, so the pre-training method and the model fine-tuning method of the geographic pre-training model provided in the following embodiments are generally executed by the server 105, and the corresponding pre-training apparatus and model fine-tuning apparatus are likewise generally disposed in the server 105. However, when the terminal devices 101, 102, and 103 have sufficient computing capability and computing resources, they may also complete the above operations through the model training or model fine-tuning applications installed on them and output the same results as the server 105; correspondingly, the pre-training apparatus or the model fine-tuning apparatus of the geographic pre-training model may also be disposed in the terminal devices 101, 102, and 103. In that case, the exemplary system architecture 100 need not include the server 105 and the network 104.
In addition, the server (or terminal device) that trains the target geographic pre-training model may differ from the server (or terminal device) that performs the fine-tuning operation based on it, thereby separating the two model operations. The target geographic pre-training model or the new geographic model trained by the server 105 may also be distilled into a lightweight model suitable for embedding in the terminal devices 101, 102, and 103; depending on the recognition accuracy actually required, either the lightweight model on the terminal devices or the more complex model on the server 105 may be flexibly selected for use.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
To facilitate understanding of the technical solutions provided in the present disclosure, reference is first made to a flowchart of a pre-training method of a geographic pre-training model provided in fig. 2, wherein the process 200 includes the following steps.
Step 201: a sequence of sample nodes is obtained.
This step is intended to have the execution body of the pre-training method of the geographic pre-training model (for example, the server 105 shown in fig. 1) obtain a sample node sequence generated based on a preset point-of-interest heterogeneous graph and a random walk algorithm. The heterogeneous graph comprises nodes representing points of interest and edges connecting the nodes; each node is named with the place name of its point of interest, and each edge represents an association that exists in the real world between the corresponding nodes.
Note that the place names of points of interest are usually expressed as text, while embodying the positional association between different nodes requires combining spatial knowledge, which is usually expressed numerically. The place name (toponym) mainly refers to the name of a geographic entity such as a POI, a street, or a region. Spatial knowledge mainly comprises the specific location of a geographic entity (usually expressed as geographic coordinates), spatial relationships between different geographic entities (usually expressed as triples), and human movement trajectories (usually expressed as ID sequences), as illustrated in the schematic diagram of fig. 3.
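The three forms of spatial knowledge listed above can be sketched as plain data structures. The identifiers and values below are made-up examples for illustration, not data from the disclosure:

```python
# Specific location of a geographic entity, expressed as coordinates.
poi_location = {
    "poi_id": "POI_01",
    "toponym": "Example Mall",  # place-name knowledge, text modality
    "lng": 116.404,             # spatial knowledge, numeric modality
    "lat": 39.915,
}

# Spatial relationship between two geographic entities, expressed as a
# (head entity, relation, tail entity) triple.
spatial_triple = ("POI_01", "located_on", "Example Street")

# Human movement trajectory, expressed as a time-ordered sequence of POI IDs.
trajectory = ["POI_01", "POI_03", "POI_08", "POI_04"]
```

These three shapes — scalar coordinates, relation triples, and ID sequences — are exactly the modal variety the pre-training input must unify.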
As the above descriptions of place-name knowledge and spatial knowledge show, two problems must be overcome to exploit them: 1) heterogeneous data integration — organically combining text of one modality (carrying place-name knowledge) with inputs such as numbers, triples, and sequences (carrying spatial knowledge) into a unified input for the pre-training model; 2) modal difference — representing data of different modalities in the same latent space, so that the model can fully learn the knowledge contained in each modality and fully apply it in downstream tasks.
In this embodiment, the place-name knowledge expressed as text and the spatial knowledge expressed as numbers — two different modalities, hence the heterogeneity — are organically combined in a graph, yielding a unified input for the pre-training model: node names carry the place-name knowledge, and the edges between nodes carry the spatial knowledge, so the heterogeneous graph contains knowledge of both modalities.
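One way to realize such a unified graph input is an adjacency structure in which each node carries its toponym text and each edge carries the type of real-world association. The representation and all names below are assumptions for illustration; the patent does not prescribe this layout:

```python
# Nodes carry place-name (text-modality) knowledge; typed edges carry
# spatial (numeric-modality) knowledge, so a single graph holds both.
nodes = {
    "01": {"toponym": "Example Mall"},
    "03": {"toponym": "Example Station"},
    "08": {"toponym": "Example Park"},
}
edges = [
    ("01", "03", "travel_order"),  # derived from a user travel trajectory
    ("03", "08", "same_block"),    # both POIs lie in one geographic block
]

def neighbors(node_id):
    """All nodes associated with node_id, regardless of edge type."""
    return [t for s, t, _ in edges if s == node_id] + \
           [s for s, t, _ in edges if t == node_id]
```

A walk over this structure visits toponym-named nodes along real-world associations, which is what makes the resulting node sequences carry both modalities at once.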
A random walk builds a path by repeatedly stepping to a randomly chosen neighbor; the underlying idea is that the walk's future steps and directions cannot be predicted from its past ones.
As the name implies, a sample node sequence generated by the random walk algorithm is a time-ordered sequence of nodes. Given 10 different points of interest named with the numbers 01-10, an exemplary sample node sequence might be 01-03-08-04: a sequence with walk length 4 that passes, in time order, through the points of interest numbered 01, 03, 08, and finally 04. It should be noted that algorithm parameters such as the walk length, the walk direction, and the walk weight of each edge may be set according to the actual situation and requirements, and are not specifically limited herein.
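The walk just described can be sketched as follows. Uniform neighbor sampling is an assumption here; as the text notes, the actual algorithm may weight edges and constrain direction, since those parameters are configurable:

```python
import random

def random_walk(graph, start, length, rng=random):
    """Generate one time-ordered node sequence of up to `length` nodes
    by repeatedly sampling a neighbor of the current node (a minimal
    sketch: uniform sampling, no edge weights or direction rules)."""
    seq = [start]
    node = start
    for _ in range(length - 1):
        nbrs = graph.get(node, [])
        if not nbrs:
            break  # dead end: stop the walk early
        node = rng.choice(nbrs)
        seq.append(node)
    return seq

# Adjacency lists for a few POIs named with numbers, as in the example.
graph = {
    "01": ["03", "05"],
    "03": ["08", "01"],
    "08": ["04"],
    "04": [],
}
walk = random_walk(graph, "01", 4, random.Random(0))
```

Each generated `walk` is one sample node sequence; repeating this from many start nodes yields the training corpus.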
Step 202: and inputting the sample node sequence serving as a training sample into an initial geographical pre-training model.
On the basis of step 201, this step is intended to have the execution body input each sample node sequence into an initial geographic pre-training model as a training sample. Specifically, depending on the model characteristics of the initial geographic pre-training model, one may check whether batch input or parallel input is supported when feeding training samples, so as to improve the input efficiency and training efficiency.
Step 203: and controlling the initial geographical pre-training model to train according to a preset training target, and outputting the current geographical pre-training model reaching the training target as a target geographical pre-training model.
Based on step 202, the execution subject performs knowledge learning from the input training samples according to a preset training target, and finally outputs the current geographic pre-training model reaching the training target as a target geographic pre-training model. The training target is a target for guiding the model to learn knowledge from the training sample and what knowledge is learned, so that the required knowledge can be learned more accurately and better.
Since a sample node sequence simultaneously contains place-name knowledge expressed as text and spatial knowledge expressed as numbers, the training target can be divided into two corresponding sub-targets: a first sub-target guiding the model to learn the place-name knowledge expressed in text form, and a second sub-target guiding the model to learn the mapping between the text and the real-world coordinates of the entity it denotes, so that spatial knowledge is effectively learned. Because directly learning the mapping between text and real-world coordinates is difficult, it can be converted into learning the mapping between the text and the real-world geographic block to which the corresponding point of interest belongs, where the block is identified by a preset position code determined from real-world coordinates; the position code thus reduces the difficulty of learning the mapping.
In other words, the training target includes a sub-target guiding the model to learn, from the training samples, the mapping between the place names of points of interest and preset position codes, each preset position code corresponding to the real-world geographic block in which the corresponding point of interest is located.
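A geohash-style grid code is one way to realize such a preset position code. The cell size and string format below are assumptions for illustration; the disclosure only requires that the code identify the real-world geographic block of a point of interest:

```python
def grid_position_code(lng, lat, cell_deg=0.01):
    """Map real-world coordinates onto a discrete grid cell whose ID
    serves as the position code; nearby POIs share one code, so the
    model learns toponym-to-block rather than toponym-to-coordinate."""
    return f"{int(lat / cell_deg)}_{int(lng / cell_deg)}"

# Two nearby POIs fall into the same geographic block; a distant one does not.
code_a = grid_position_code(116.404, 39.915)
code_b = grid_position_code(116.406, 39.917)
code_c = grid_position_code(116.504, 39.915)
```

Predicting a discrete code turns the hard regression onto continuous coordinates into a classification over blocks, which is the difficulty reduction described above.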
By using the graph structure of a heterogeneous graph to organically fuse place-name knowledge expressed in text form with spatial knowledge expressed in numeric form, the pre-training method provided by this embodiment overcomes the modal differences in multi-modal geographic knowledge, allows an initial geographic pre-training model capable of processing graph data to better learn geographic knowledge of different modalities in the same latent space, and thereby provides a better geographic pre-training model for downstream tasks related to geographic position, improving the effect of those tasks.
On the basis of the above embodiment, a pre-search node attached to each node may be added to the point-of-interest heterogeneous graph, where the pre-search node records the search term received before the corresponding interest point was selected. Through the attachment relationship between the pre-search nodes and the nodes, the generated sample node sequence includes the search terms of the interest points, so that the geographic pre-training model is trained in combination with the search terms, thereby improving the accuracy and comprehensiveness of discovering associations between different interest points.
In the above embodiment, the edges in the point-of-interest heterogeneous graph represent association relations existing in the real world between the corresponding nodes, and can be further divided into solid edges and dashed edges according to the type of association: the solid edges are determined based on the interest-point time sequences recorded in users' historical travel trajectories and represent travel logical associations between different nodes; the dashed edges represent same-block associations between different nodes within the same geographic block. By adding the same-block associations represented by the dashed edges, more candidate node sequences can be obtained when sample node sequences are generated with the random walk algorithm, through the node replacement options or longer walk lengths that the same-block associations provide, and the model training effect is finally improved by raising the order of magnitude of the training samples.
Referring to fig. 4, fig. 4 is a flowchart of a method for generating a sample node sequence according to an embodiment of the present disclosure, that is, a specific implementation is provided for how to obtain the sample node sequence required by step 201. The process 400 includes the following steps.
Step 401: a user search log and a point of interest database are obtained from a mapping application.
This step is intended to obtain the user search logs and the point of interest database from the mapping application by the executing agent (which may still be the server 105 shown in fig. 1, or may be a different server or other computing-capable device from the server 105).
The interest point database records the place name knowledge and the space knowledge of each interest point. Place name knowledge mainly refers to place names, and place names mainly refer to names of geographic location entities (such as POIs, streets and regions); the spatial knowledge mainly comprises a specific position of one geographical position entity (usually expressed in the form of geographical coordinates), a spatial relationship between different geographical entities (usually expressed in the form of a triplet), and a human movement trajectory (usually expressed in the form of an ID sequence), as shown in the schematic diagram of fig. 3.
Step 402: and extracting search words corresponding to each search of the user, actually selected interest points and an interest point time sequence corresponding to the user travel track from the user search log.
The interest point time sequence is obtained by arranging a plurality of interest points related in the user travel track according to the arrival time sequence.
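This arrangement step can be sketched as follows (the log-record shape used here is a hypothetical assumption for illustration):

```python
# Sketch: build the interest-point time sequence for one user trip by
# sorting the visited POIs by arrival time, as described above.
def poi_time_sequence(trip_records):
    """trip_records: iterable of (poi_id, arrival_time) pairs for one trip."""
    return [poi for poi, _ in sorted(trip_records, key=lambda r: r[1])]

seq = poi_time_sequence([("museum", 14), ("station", 9), ("cafe", 11)])
# seq == ["station", "cafe", "museum"]
```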
Step 403: and taking each interest point as a node, and establishing a preposed search node attached to the corresponding node according to the corresponding search word.
Step 404: and establishing solid line edge connection between corresponding nodes with trip logic association according to the interest point time sequence.
Step 405: and establishing a dotted line edge connection between corresponding nodes associated with the blocks according to the boundary of each geographic block and the real world coordinates of each interest point in the spatial knowledge to obtain the interest point heterogeneous graph.
The actually constructed point-of-interest heterogeneous graph can be seen in the schematic diagram shown in fig. 5. The graph shown in fig. 5 contains two types of nodes: POI nodes and pre-search nodes (the search word input when a user selects a POI); and three types of edges: search-click edges ("search-click-POI" in the figure, i.e., the user searches for the POI using the search word), same-block co-occurrence edges ("POI-co-occurrence-POI" in the figure, i.e., two POIs appear in the same block, the blocks being divided in advance using the division scheme provided by the S2 geometry library), and movement trajectory edges ("starting point-to-ending point" in the figure, i.e., two POIs successively reached by the user).
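The graph construction of steps 403 to 405 can be sketched as follows (a simplified illustration with hypothetical input shapes; real construction would use S2 blocks and full search logs):

```python
# Minimal sketch of the point-of-interest heterogeneous graph described above,
# with the two node types and three edge types of fig. 5. Names are illustrative.
from collections import defaultdict

def build_poi_graph(clicks, trajectories, block_of):
    """clicks: list of (query, poi); trajectories: list of POI-id sequences
    ordered by arrival time; block_of: poi -> geographic-block id."""
    adj = defaultdict(list)  # node -> list of (neighbor, edge_type)
    for query, poi in clicks:                     # search-click edges
        adj["Q:" + query].append((poi, "click"))
        adj[poi].append(("Q:" + query, "click"))
    for traj in trajectories:                     # solid trajectory edges
        for a, b in zip(traj, traj[1:]):
            adj[a].append((b, "trajectory"))
            adj[b].append((a, "trajectory"))
    by_block = defaultdict(list)                  # dashed same-block edges
    for poi in {p for _, p in clicks} | {p for t in trajectories for p in t}:
        by_block[block_of(poi)].append(poi)
    for pois in by_block.values():
        for i, a in enumerate(pois):
            for b in pois[i + 1:]:
                adj[a].append((b, "co-occurrence"))
                adj[b].append((a, "co-occurrence"))
    return adj
```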
Step 406: and carrying out a random walk operation on the interest point heterogeneous graph through a random walk algorithm to obtain a sample node sequence.
Specifically, a large number of sample node sequences can be obtained quickly and efficiently through the random walk algorithm parameters set according to actual conditions.
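A minimal sketch of such a random-walk sampler, assuming the adjacency structure produced by the previous steps (uniform neighbor choice here; real samplers may weight edge types and tune walk parameters differently):

```python
import random

def random_walks(adj, num_walks, walk_length, seed=0):
    """Uniform random walks over the heterogeneous graph.
    adj maps node -> list of (neighbor, edge_type)."""
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    nodes = list(adj)
    walks = []
    for _ in range(num_walks):
        node = rng.choice(nodes)
        walk = [node]
        for _ in range(walk_length - 1):
            neighbors = adj.get(node)
            if not neighbors:          # dead end: stop this walk early
                break
            node = rng.choice(neighbors)[0]
            walk.append(node)
        walks.append(walk)
    return walks
```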
The technical scheme provided by this embodiment starts from the user search log and the interest point database, introduces search terms into the sample node sequences by setting pre-search nodes, and expands possibly unrecorded user travel trajectories by setting solid edges reflecting travel logical associations and dashed edges reflecting same-block associations, so that the subsequently obtained sample node sequences contain more valuable knowledge and their order of magnitude is increased, finally improving the comprehensiveness and accuracy with which the geographic pre-training model learns the relevant geographic knowledge.
In order to improve the training effect of the target geographic pre-training model as much as possible, and considering that the training sample is a node sequence, the initial geographic pre-training model may further include a first conversion (Transformer) layer, an aggregation (TranSAGE) layer, and a second conversion (Transformer) layer; please refer to the schematic diagram shown in fig. 6. As shown in fig. 6, the first conversion layer (i.e., the Transformer (L12) shown in fig. 6) is configured to perform first feature coding on the node information of each node constituting the sample node sequence, so as to obtain a node classification code and a node context code; the aggregation layer (i.e., the TranSAGE layer shown in fig. 6) is configured to perform feature aggregation on the node classification code of each node in combination with the node classification codes of other nodes to obtain an aggregated node classification code; and the second conversion layer (i.e., the Transformer (L1) shown in fig. 6) is configured to perform second feature coding on the aggregated node classification code and the node context code of each node, respectively, the result of each second feature coding being used to perform training of the corresponding pre-training target according to the knowledge representation form contained in the node information.
In order to facilitate understanding of the above technical solution, the above data processing procedure is further explained in detail by a specific operation manner.
After randomly walking on the point-of-interest heterogeneous graph to obtain an input document S = (v_1, …, v_n), each node v_i in S is first converted into a sequence of subwords by the SentencePiece algorithm. A Transformer layer is then used to encode S, producing for each node v_i a node classification code h_i^cls and a node context code h_i^ctx.

Subsequently, a Transformer-based aggregation layer is used to model the graph structure in the input sequence; for efficient operation, only the aggregated sequence representation h_i^cls of each node takes part in the following calculation:

g_i = W_τ(i) · Agg({h_j^cls : v_j ∈ N(v_i)}) + U_τ(i) · h_i^cls,

where W and U are two linear layers whose parameters are chosen according to the class τ(i) of the node.

Subsequently, the aggregated representation g_i and its original context h_i^ctx are concatenated end to end and modeled with another Transformer layer, whose output will be used to perform the pre-training tasks.
The above example is only a specific implementation combining the above ideas in a certain application scenario, and those skilled in the art can obtain a plurality of variants and adaptive adjustments based on the data processing ideas reflected by the first conversion layer, the aggregation layer, and the second conversion layer, and by combining different practical situations, which are not listed here.
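As a toy illustration of the aggregation step described above (only the per-node-class linear combination, not the full Transformer-based TranSAGE layer; the function name and the mean-pooling choice are illustrative assumptions), a minimal NumPy sketch:

```python
import numpy as np

def aggregate_node_codes(h_cls, neighbors, node_type, W, U):
    """Toy sketch: g_i = W[t_i] @ mean_j(h_j) + U[t_i] @ h_i, where W and U
    are per-node-class linear maps (plain matrices here, not the model's
    Transformer-based aggregation layer).
    h_cls: (n, d) node classification codes; neighbors: node index -> list of
    neighbor indices; node_type: class index per node; W, U: (num_types, d, d)."""
    n, d = h_cls.shape
    g = np.zeros_like(h_cls)
    for i in range(n):
        nbrs = neighbors.get(i, [])
        pooled = h_cls[nbrs].mean(axis=0) if nbrs else np.zeros(d)
        t = node_type[i]
        g[i] = W[t] @ pooled + U[t] @ h_cls[i]
    return g
```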
On the basis of any of the above embodiments, in consideration of the fact that the mapping relationship between the text and the real world coordinates is difficult to find, the mapping relationship can be converted into the mapping relationship between the learning text and the geographic area of the real world to which the corresponding interest point belongs, and the geographic area can adopt the preset position code determined based on the real world coordinates, so that the searching difficulty of the mapping relationship is reduced through the position code.
One coding rule for the preset position code may be as follows.
The real world is divided into a plurality of geographic blocks according to a preset block division mode (for example, the division standard provided by the S2 geometry library may be used).
Each geographic block corresponds to one coding token (which may be referred to as a Token). The length of the coding token corresponds to the block-division granularity level it represents: the token length increases by one for every two levels of increase in granularity, and the coding tokens of adjacent granularity levels (for example, levels 2n-1 and 2n) differ only in the coding of their last character.
In order to predict the coding tokens at multiple levels as efficiently as possible, the prediction task may be converted into a per-position prediction of each character constituting the coding token; for example, the following three contents are predicted, each corresponding to its own label: 1) the character of the coding token at level 2n-1; 2) the character of the coding token at level 2n; 3) the penultimate character shared by the coding tokens at levels 2n-1 and 2n.
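The coding rule above can be illustrated with a toy quadtree-based encoder (the digit and character alphabets are illustrative assumptions, not the actual S2 scheme): each character packs two granularity levels, so the token length grows by one per two levels, and an odd level leaves a coarser "partial" last character that the next even level refines.

```python
def position_token(path, level):
    """Toy position code: `path` is a quadtree digit sequence (each digit 0-3)
    down to the finest level. The token for granularity level L has ceil(L/2)
    characters; levels 2n-1 and 2n share all but the last character.
    The alphabets below are illustrative, not the real S2 encoding."""
    assert 1 <= level <= len(path)
    full_pairs, odd = divmod(level, 2)
    chars = ["0123456789abcdef"[path[2 * j] * 4 + path[2 * j + 1]]
             for j in range(full_pairs)]
    if odd:  # odd level: last char encodes a single, coarser digit
        chars.append("wxyz"[path[2 * full_pairs]])
    return "".join(chars)

path = [3, 1, 0, 2]
# position_token(path, 3) and position_token(path, 4) share their first character
# and differ only in the last one, matching the rule above.
```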
As shown in fig. 7, the training goal is for the geographic pre-training model to learn the mapping relationship between the interest-point place name represented by text and the geographic block where the interest point is located in the real world. For example, the input is "X Y Park, D Road, C District, City B, Country A", and the output is the multi-level tokenized expression of the coordinates associated with that address (e.g., 35f1C, 35f1B, 35f1ac, 35f1a9 shown in fig. 7).
It should be noted that this embodiment provides only an exemplary encoding rule for the preset position code; both the rule and the specific number of characters of the preset position code may be adjusted as needed in combination with the actual situation, as long as the preset position code can reduce the difficulty of learning the mapping relationship between the text and the geographic block code.
The embodiments described above illustrate from various aspects how the target geographic pre-training model can be obtained through pre-training. The following description explains how the geographic pre-training model can serve as an available "middleware" or "semi-finished product" that assists other downstream tasks in geography-related technical fields (such as the map field), so that, on the basis of this "semi-finished product", a downstream task can obtain a new geographic model with higher accuracy and better effect through a small amount of training on a small number of samples.
FIG. 8 provides a model fine-tuning method of the geographic pre-training model through a flow 800, which comprises the following steps.
Step 801: and acquiring a target geographical pre-training model.
Step 802: obtaining new function requirements of the map application, and determining new training samples corresponding to the new function requirements.
Step 803: and on the basis of the target geographic pre-training model, generating a new geographic model corresponding to the new functional requirement through a model fine-tuning technology and a new training sample.
The principle of the model fine-tuning technology is that the model parameters of the previously trained target geographic pre-training model are used as the initialization parameters of the new geographic model, so that the new model directly obtains a good starting point through parameter inheritance. The premise of using the model fine-tuning technology is that the new function requirement is strongly related to the capability of the target geographic pre-training model; under this premise, a new geographic model with a good effect for realizing the new function requirement can be obtained quickly through a small number of new training samples.
Specifically, the target geographic pre-training model integrates place name knowledge and spatial knowledge of the interest points, and various associations existing among different interest points are found through learning, so that the effect can be improved through the method as long as new function requirements are related to the learned knowledge.
Two new functional requirements are introduced below.
Firstly, when the new function requirement is similar-interest-point recommendation, a user questionnaire corresponding to the recommendation is determined, and new training samples are then generated according to the content recorded in the user questionnaire.
Subsequently, on the basis of the target geographic pre-training model, a new geographic model for recommending similar interest points according to the current interest point is generated through the model fine-tuning technology and the new training samples. The new geographic model can then recommend similar interest points to the user according to the current interest point. This new function requirement can make use of the same-block logical association learned by the target geographic pre-training model from the dashed edges of the point-of-interest heterogeneous graph (derived from the observation that interest points of the same type usually "cluster" together), and recommend to the user other interest points of the same type as the current one based on that association.
Secondly, when the new function requirement is random strolling, a small number of new training samples can be obtained through questionnaires or other means. Subsequently, on the basis of the target geographic pre-training model, a new geographic model for recommending other interest points according to the current interest point is generated through the model fine-tuning technology and the new training samples. The new geographic model can recommend other interest points to the user according to the current interest point. This new function requirement can make use of the travel logical association learned by the target geographic pre-training model from the solid edges of the point-of-interest heterogeneous graph, and recommend to the user other interest points that have a travel logical relation with the current one, thereby satisfying the user's need to wander and explore through that association.
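The parameter-inheritance principle underlying the model fine-tuning technology can be sketched as follows (parameters are represented as plain dictionaries purely for illustration):

```python
import copy

def init_from_pretrained(pretrained_params, head_init):
    """Sketch of the fine-tuning principle described above: the new geographic
    model inherits the backbone parameters of the trained pre-training model
    and only adds a freshly initialized task-specific head, which is then
    trained on the small set of new samples."""
    new_params = copy.deepcopy(pretrained_params)  # parameter inheritance
    new_params["task_head"] = head_init            # new task-specific layer
    return new_params
```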
With further reference to fig. 9 and 10, as implementations of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of a pre-training device of a geographic pre-training model and an embodiment of a model fine-tuning device of the geographic pre-training model, respectively, where the embodiment of the pre-training device of the geographic pre-training model corresponds to the embodiment of the pre-training method of the geographic pre-training model shown in fig. 2, and the embodiment of the model fine-tuning device of the geographic pre-training model corresponds to the embodiment of the model fine-tuning method of the geographic pre-training model shown in fig. 8. The device can be applied to various electronic equipment in particular.
As shown in fig. 9, the pre-training apparatus 900 of the geographic pre-training model of the present embodiment may include: a sample node sequence acquisition unit 901, a training sample input unit 902, and a pre-training unit 903. The sample node sequence acquisition unit 901 is configured to acquire a sample node sequence generated based on a preset interest point heterogeneous graph and a random walk algorithm; the interest point heterogeneous graph comprises nodes corresponding to interest points and edges connecting the nodes, the node names are the place names of the corresponding interest points, and the edges represent association relations existing in the real world between the corresponding nodes. The training sample input unit 902 is configured to input the sample node sequence as a training sample into the initial geographic pre-training model. The pre-training unit 903 is configured to control the initial geographic pre-training model to train according to a preset training target, and output the current geographic pre-training model that reaches the training target as the target geographic pre-training model; the training target comprises a sub-target for guiding the model to learn, from the training samples, the mapping relation between the place names of the interest points and preset position codes, the preset position codes corresponding to the geographic blocks where the corresponding interest points are located in the real world.
In this embodiment, in the pre-training apparatus 900 of the geographic pre-training model, the specific processing of the sample node sequence acquisition unit 901, the training sample input unit 902, and the pre-training unit 903 and the technical effects thereof can refer to the related descriptions of steps 201 to 203 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the interest point heterogeneous graph may further include pre-search nodes attached to the nodes, each recording the search term received before the corresponding interest point was selected.
In some optional implementation manners of this embodiment, the edges include a solid edge and a dotted edge, the solid edge is determined based on the point of interest time sequence recorded in the historical travel trajectory of the user, the solid edge represents a travel logical association between different nodes, and the dotted edge represents a same-block association between different nodes in the same geographic block.
In some optional implementations of this embodiment, the pre-training apparatus 900 of the geographic pre-training model may further include a sample node sequence generating unit configured to generate the sample node sequence based on the interest point heterogeneous graph and the random walk algorithm. The sample node sequence generating unit may be further configured to:
Obtaining a user search log and an interest point database from a map application; the interest point database records the place name knowledge and the space knowledge of each interest point.
And extracting search words corresponding to each search of the user, actually selected interest points and an interest point time sequence corresponding to the user travel track from the user search log.
And taking each interest point as a node, and establishing a preposed search node attached to the corresponding node according to the corresponding search word.
And establishing solid line edge connection between corresponding nodes with trip logic association according to the interest point time sequence.
And establishing a dotted line edge connection between corresponding nodes associated with the blocks according to the boundary of each geographic block and the real world coordinates of each interest point in the spatial knowledge to obtain the interest point heterogeneous graph.
And carrying out a random walk operation on the interest point heterogeneous graph through the random walk algorithm to obtain the sample node sequence.
In some optional implementation manners of this embodiment, the initial geographic pre-training model includes a first conversion layer, an aggregation layer, and a second conversion layer, where the first conversion layer is configured to perform first feature coding on node information of each node constituting the sample node sequence, respectively, to obtain node classification codes and node context codes, the aggregation layer is configured to perform feature aggregation on the node classification code of each node in combination with node classification codes of other nodes, to obtain aggregated node classification codes, and the second conversion layer is configured to perform second feature coding on the aggregated node classification code and the node context code of each node, respectively.
In some optional implementations of this embodiment, the encoding rule of the preset position code includes: dividing the real world into a plurality of geographical blocks according to a preset block dividing mode; controlling each geographic zone to respectively correspond to one coding token; the length of the coded token corresponds to the represented block division granularity level, the length of the coded token is increased by one when the block division granularity level is increased by two levels, and the coded tokens of the adjacent geographical block division granularity levels only have the last bit with different codes.
As an apparatus embodiment corresponding to a method embodiment of a geographic pre-training model training method, the geographic pre-training model training apparatus provided in this embodiment organically fuses place name knowledge represented in a text form and spatial knowledge represented in a digital form with a graph structure of a heterogeneous graph, so as to overcome modal differences existing in multi-modal geographic knowledge, and can better learn geographic knowledge of different modalities in the same implicit space by means of an initial geographic pre-training model capable of processing graph data, thereby providing a better geographic pre-training model for a downstream task related to a geographic position, and improving a task implementation effect on the downstream task.
As shown in fig. 10, the model fine tuning apparatus 1000 of the geographic pre-training model of the present embodiment may include: a target geographical pre-training model obtaining unit 1001, a new training sample determining unit 1002, and a new geographical model generating unit 1003. The target geography pre-training model obtaining unit 1001 is configured to obtain a target geography pre-training model; wherein, the target geographical pre-training model is obtained according to the geographical pre-training model training device as shown in fig. 9; a new training sample determination unit 1002 configured to acquire a new function requirement of the map application, and determine a new training sample corresponding to the new function requirement; and the new geographic model generation unit 1003 is configured to generate a new geographic model corresponding to the new functional requirement through a model fine-tuning technology and a new training sample on the basis of the target geographic pre-training model.
In this embodiment, in the model fine-tuning apparatus 1000 of the geographic pre-training model: the specific processes of the target geographic pre-training model obtaining unit 1001, the new training sample determining unit 1002, and the new geographic model generating unit 1003 and the technical effects thereof may be respectively as described in the embodiment of the model fine-tuning method for the geographic pre-training model shown in fig. 8, and are not described herein again.
In some optional implementations of this embodiment, the new training sample determination unit 1002 may include a new training sample determination subunit configured to determine the new training sample corresponding to the new function requirement. The new training sample determination subunit may be further configured to:
And responding to the new function requirements for recommending the similar interest points, and determining a user questionnaire corresponding to the similar interest point recommendation.
A new training sample is generated from the user questionnaire.
Correspondingly, the new geographic model generation unit 1003 may be further configured to:
On the basis of a target geographical pre-training model, a new geographical model for recommending similar interest points according to the current interest points is generated through a model fine-tuning technology and a new training sample.
In some optional implementations of this embodiment, the new geographic model generation unit 1003 may be further configured to:
In response to the new function requirement being random strolling, on the basis of the target geographic pre-training model, generate a new geographic model for recommending other interest points in the same block according to the current interest point through the model fine-tuning technology and the new training samples.
As an apparatus embodiment corresponding to a method embodiment of a model fine-tuning method for a geographic pre-training model, the model fine-tuning apparatus for a geographic pre-training model provided in this embodiment can quickly obtain a new geographic model that is actually used to meet a new functional requirement based on a target geographic pre-training model that includes more geographic knowledge, in combination with a new functional requirement and a model fine-tuning technology, on the basis of the target geographic pre-training model.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can implement the pre-training method of the geographic pre-training model and/or the model fine-tuning method of the geographic pre-training model described in any of the above embodiments.
According to an embodiment of the present disclosure, there is also provided a readable storage medium storing computer instructions which, when executed, enable a computer to implement the pre-training method of the geographic pre-training model and/or the model fine-tuning method of the geographic pre-training model described in any of the above embodiments.
The embodiments of the present disclosure provide a computer program product, which when executed by a processor can implement the pre-training method of the geographic pre-training model and/or the model fine-tuning method of the geographic pre-training model described in any of the above embodiments.
FIG. 11 shows a schematic block diagram of an example electronic device 1100 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the device 1100 comprises a computing unit 1101, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the device 1100 may also be stored. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in device 1100 connect to I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, and the like; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108 such as a magnetic disk, optical disk, or the like; and a communication unit 1109 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1109 allows the device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1101 can be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the various methods and processes described above, such as the pre-training method of the geographic pre-training model and/or the model fine-tuning method of the geographic pre-training model. For example, in some embodiments, the pre-training method of the geographic pre-training model and/or the model fine-tuning method of the geographic pre-training model may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the pre-training method of the geographic pre-training model and/or the model fine-tuning method of the geographic pre-training model described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured by any other suitable means (e.g., by means of firmware) to perform the pre-training method of the geographic pre-training model and/or the model fine-tuning method of the geographic pre-training model.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that remedies the drawbacks of difficult management and weak service scalability found in conventional physical hosts and Virtual Private Server (VPS) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (18)
1. A pre-training method of a geographic pre-training model comprises the following steps:
obtaining a sample node sequence; wherein the sample node sequence is generated based on a preset point-of-interest heterogeneous graph and a random walk algorithm, the heterogeneous graph comprises nodes each corresponding to a point of interest and edges connecting the nodes, the name of each node is the place name of the corresponding point of interest, and the edges represent association relationships that exist in the real world between the corresponding nodes;
inputting the sample node sequence, as a training sample, into an initial geographical pre-training model; wherein the initial geographical pre-training model comprises a first conversion layer, an aggregation layer, and a second conversion layer; the first conversion layer is used for performing first feature coding on the node information of each node constituting the sample node sequence to obtain a node classification code and a node context code for each node; the aggregation layer is used for performing feature aggregation on each node's classification code in combination with the classification codes of the other nodes to obtain an aggregated node classification code; and the second conversion layer is used for performing second feature coding on the aggregated node classification code and the node context code of each node; and
controlling the initial geographical pre-training model to train according to a preset training target, and outputting the current geographical pre-training model that reaches the training target as a target geographical pre-training model; wherein the training target comprises a sub-target for guiding the model to learn, from the training samples, the mapping relationship between the place names of the points of interest and preset position codes, each preset position code corresponding to the geographic area in which the corresponding point of interest is located in the real world.
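For illustration only, the data flow through the first conversion layer, aggregation layer, and second conversion layer recited above can be sketched in numpy. The random projection matrices and the mean-based aggregation are stand-in assumptions; the patent's actual layers are learned encoder layers:

```python
import numpy as np

# Toy sketch of the three-stage structure in claim 1: first feature
# coding -> cross-node aggregation of classification codes -> second
# feature coding. Matrices are random stand-ins for learned weights.
rng = np.random.default_rng(0)
dim = 8
W1_cls, W1_ctx, W2 = (rng.normal(size=(dim, dim)) for _ in range(3))

def first_conversion(node_feats):
    # first feature coding: one classification code and one context
    # code per node of the sample node sequence
    return node_feats @ W1_cls, node_feats @ W1_ctx

def aggregate(cls_codes):
    # feature aggregation: combine each node's classification code with
    # those of the other nodes (here: add the sequence-wide mean)
    return cls_codes + cls_codes.mean(axis=0, keepdims=True)

def second_conversion(agg_cls, ctx_codes):
    # second feature coding over aggregated classification codes and
    # the (unaggregated) context codes
    return agg_cls @ W2, ctx_codes @ W2

nodes = rng.normal(size=(5, dim))          # 5 nodes in the sequence
cls_c, ctx_c = first_conversion(nodes)
out_cls, out_ctx = second_conversion(aggregate(cls_c), ctx_c)
```

The sketch only shows the shape of the pipeline: each node receives a classification code and a context code, classification codes are mixed across the whole sequence, and both streams are encoded a second time.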
2. The method of claim 1, wherein the point-of-interest heterogeneous graph further comprises: a pre-search node attached to each node, recording the search terms received before the corresponding point of interest was selected.
3. The method of claim 1, wherein the edges comprise solid-line edges determined based on point-of-interest time series recorded in users' historical travel trajectories, the solid-line edges characterizing travel-logic associations between different nodes, and dashed-line edges characterizing block associations between different nodes within the same geographic block.
4. The method of claim 1, further comprising: generating the sample node sequence based on the point-of-interest heterogeneous graph and the random walk algorithm, wherein the generating comprises:
acquiring a user search log and a point-of-interest database from a map application; wherein the point-of-interest database records the place-name knowledge and the spatial knowledge of each point of interest;
extracting, from the user search log, the search terms corresponding to each search by the user, the points of interest actually selected, and the point-of-interest time series corresponding to the user's travel trajectories;
taking each point of interest as a node, and establishing, according to the corresponding search terms, a pre-search node attached to the corresponding node;
establishing solid-line edge connections between corresponding nodes having travel-logic associations according to the point-of-interest time series;
establishing dashed-line edge connections between corresponding nodes having block associations according to the boundary of each geographic block and the real-world coordinates of each point of interest in the spatial knowledge, to obtain the point-of-interest heterogeneous graph; and
performing a random walk operation on the point-of-interest heterogeneous graph through the random walk algorithm to obtain the sample node sequence.
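By way of a sketch, the graph-construction and random-walk steps above can be written in plain Python. The POI names, the block layout, and the uniform choice among neighbors are illustrative assumptions; the claim does not fix a particular walk strategy:

```python
import random

# Build a point-of-interest heterogeneous graph with solid (travel-logic)
# and dashed (same-geographic-block) edges, then sample node sequences by
# random walk. All POI names below are made up.
def build_graph(travel_sequences, blocks):
    adj = {}
    def add_edge(a, b, kind):
        adj.setdefault(a, []).append((b, kind))
        adj.setdefault(b, []).append((a, kind))
    # solid edges: consecutive POIs within one user's travel trajectory
    for seq in travel_sequences:
        for a, b in zip(seq, seq[1:]):
            add_edge(a, b, "solid")
    # dashed edges: every pair of POIs inside the same geographic block
    for block in blocks:
        for i, a in enumerate(block):
            for b in block[i + 1:]:
                add_edge(a, b, "dashed")
    return adj

def random_walk(adj, start, length, rng=random):
    walk = [start]
    while len(walk) < length:
        neighbors = adj.get(walk[-1])
        if not neighbors:
            break                      # dead end: stop the walk early
        walk.append(rng.choice(neighbors)[0])
    return walk

travel = [["station", "museum", "cafe"], ["cafe", "park"]]
blocks = [["museum", "cafe"], ["park", "station"]]
g = build_graph(travel, blocks)
sample = random_walk(g, "station", 5)   # one sample node sequence
```

Each walk yields one sample node sequence; repeating the walk from many start nodes yields the pre-training corpus.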
5. The method according to any one of claims 1-4, wherein the coding rule of the preset position codes comprises: dividing the real world into a plurality of geographic blocks according to a preset block-division mode; and making each geographic block correspond to one encoded token; wherein the length of an encoded token corresponds to the block-division granularity level it represents, the length of the encoded token increases by one each time the block-division granularity level increases by two, and the encoded tokens of adjacent block-division granularity levels differ only in the last bit.
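One concrete (hypothetical) coding rule with these properties is a quadtree code: the world is split recursively into four blocks, so a coarser block's code is a prefix of its sub-blocks' codes and codes of adjacent granularity levels differ only in the final token. The quadtree division and the one-token-per-level growth below are simplifying assumptions, not the claim's exact rule:

```python
# Quadtree-style position code: each granularity level halves the
# latitude and longitude ranges and appends one token in {0, 1, 2, 3}.
def position_code(lat, lon, levels,
                  lat_range=(-90.0, 90.0), lon_range=(-180.0, 180.0)):
    code = []
    lat_lo, lat_hi = lat_range
    lon_lo, lon_hi = lon_range
    for _ in range(levels):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        quadrant = (lat >= lat_mid) * 2 + (lon >= lon_mid)
        code.append(str(quadrant))
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return "".join(code)

coarse = position_code(39.9, 116.4, 4)   # Beijing-area coordinates
fine = position_code(39.9, 116.4, 5)     # one level finer
```

With such a scheme, a model that learns the place-name-to-code mapping implicitly learns spatial proximity, since nearby points of interest share long code prefixes.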
6. A model fine-tuning method for a geographic pre-training model comprises the following steps:
acquiring a target geographical pre-training model; wherein the target geographical pre-training model is obtained according to the pre-training method of a geographical pre-training model of any one of claims 1-5;
acquiring a new functional requirement of a map application, and determining a new training sample corresponding to the new functional requirement; and
generating, on the basis of the target geographical pre-training model, a new geographic model corresponding to the new functional requirement through a model fine-tuning technique and the new training sample.
7. The method of claim 6, wherein the determining a new training sample corresponding to the new functional requirement comprises:
in response to the new functional requirement being similar-point-of-interest recommendation, determining a user questionnaire corresponding to the similar-point-of-interest recommendation; and
generating the new training sample according to the user questionnaire;
and correspondingly, the generating, on the basis of the target geographical pre-training model, a new geographic model corresponding to the new functional requirement through a model fine-tuning technique and the new training sample comprises:
generating, on the basis of the target geographical pre-training model, a new geographic model for recommending similar points of interest according to the current point of interest through a model fine-tuning technique and the new training sample.
8. The method of claim 6, wherein the generating, on the basis of the target geographical pre-training model, a new geographic model corresponding to the new functional requirement through a model fine-tuning technique and the new training sample comprises:
in response to the new functional requirement being random exploration, generating, on the basis of the target geographical pre-training model, a new geographic model for recommending other points of interest according to the current point of interest through a model fine-tuning technique and the new training sample.
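The fine-tuning pattern of claims 6-8 can be sketched with a toy example: the representation produced by pre-training is kept frozen, and only a small task head is trained on the new samples. The embeddings, POI names, and linear scorer below are assumptions for illustration, not the patented architecture:

```python
# Frozen (toy) POI embeddings standing in for the output of the target
# geographical pre-training model; only the new head is trained.
pretrained_embedding = {
    "cafe":   [0.9, 0.1],
    "museum": [0.8, 0.2],
    "park":   [0.1, 0.9],
}

def score(head, poi):
    # relevance of a POI under the fine-tuned head
    emb = pretrained_embedding[poi]
    return sum(w * x for w, x in zip(head, emb))

def fine_tune(samples, lr=0.1, epochs=200):
    head = [0.0, 0.0]             # only these parameters are updated
    for _ in range(epochs):
        for poi, label in samples:
            err = label - score(head, poi)
            emb = pretrained_embedding[poi]
            head = [w + lr * err * x for w, x in zip(head, emb)]
    return head

# new training samples for a similar-POI-recommendation requirement:
# label 1.0 marks POIs judged similar to the query POI "cafe"
new_samples = [("cafe", 1.0), ("museum", 1.0), ("park", 0.0)]
head = fine_tune(new_samples)
```

The same pattern serves different new functional requirements by swapping only the new training samples and the head, which is the point of starting from one shared pre-trained model.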
9. A pre-training apparatus for a geographical pre-training model, comprising:
a sample node sequence acquisition unit configured to acquire a sample node sequence; wherein the sample node sequence is generated based on a preset point-of-interest heterogeneous graph and a random walk algorithm, the heterogeneous graph comprises nodes each corresponding to a point of interest and edges connecting the nodes, the name of each node is the place name of the corresponding point of interest, and the edges represent association relationships that exist in the real world between the corresponding nodes;
a training sample input unit configured to input the sample node sequence, as a training sample, into an initial geographical pre-training model; wherein the initial geographical pre-training model comprises a first conversion layer, an aggregation layer, and a second conversion layer; the first conversion layer is used for performing first feature coding on the node information of each node constituting the sample node sequence to obtain a node classification code and a node context code for each node; the aggregation layer is used for performing feature aggregation on each node's classification code in combination with the classification codes of the other nodes to obtain an aggregated node classification code; and the second conversion layer is used for performing second feature coding on the aggregated node classification code and the node context code of each node; and
a pre-training unit configured to control the initial geographical pre-training model to train according to a preset training target, and to output the current geographical pre-training model that reaches the training target as a target geographical pre-training model; wherein the training target comprises a sub-target for guiding the model to learn, from the training samples, the mapping relationship between the place names of the points of interest and preset position codes, each preset position code corresponding to the geographic area in which the corresponding point of interest is located in the real world.
10. The apparatus of claim 9, wherein the point-of-interest heterogeneous graph further comprises: a pre-search node attached to each node, recording the search terms received before the corresponding point of interest was selected.
11. The apparatus of claim 9, wherein the edges comprise solid-line edges determined based on point-of-interest time series recorded in users' historical travel trajectories, the solid-line edges characterizing travel-logic associations between different nodes, and dashed-line edges characterizing block associations between different nodes within the same geographic block.
12. The apparatus of claim 9, further comprising: a sample node sequence generation unit configured to generate the sample node sequence based on the point-of-interest heterogeneous graph and the random walk algorithm, the sample node sequence generation unit being further configured to:
acquire a user search log and a point-of-interest database from a map application; wherein the point-of-interest database records the place-name knowledge and the spatial knowledge of each point of interest;
extract, from the user search log, the search terms corresponding to each search by the user, the points of interest actually selected, and the point-of-interest time series corresponding to the user's travel trajectories;
take each point of interest as a node, and establish, according to the corresponding search terms, a pre-search node attached to the corresponding node;
establish solid-line edge connections between corresponding nodes having travel-logic associations according to the point-of-interest time series;
establish dashed-line edge connections between corresponding nodes having block associations according to the boundary of each geographic block and the real-world coordinates of each point of interest in the spatial knowledge, to obtain the point-of-interest heterogeneous graph; and
perform a random walk operation on the point-of-interest heterogeneous graph through the random walk algorithm to obtain the sample node sequence.
13. The apparatus according to any one of claims 9-12, wherein the coding rule of the preset position codes comprises: dividing the real world into a plurality of geographic blocks according to a preset block-division mode; and making each geographic block correspond to one encoded token; wherein the length of an encoded token corresponds to the block-division granularity level it represents, the length of the encoded token increases by one each time the block-division granularity level increases by two, and the encoded tokens of adjacent block-division granularity levels differ only in the last bit.
14. A model fine-tuning apparatus for a geographical pre-training model, comprising:
a target geographical pre-training model acquisition unit configured to acquire a target geographical pre-training model; wherein the target geographical pre-training model is obtained by the pre-training apparatus of any one of claims 9-13;
a new training sample determination unit configured to acquire a new functional requirement of a map application and determine a new training sample corresponding to the new functional requirement; and
a new geographic model generation unit configured to generate, on the basis of the target geographical pre-training model, a new geographic model corresponding to the new functional requirement through a model fine-tuning technique and the new training sample.
15. The apparatus of claim 14, wherein the new training sample determination unit comprises a new training sample determination subunit configured to determine the new training sample corresponding to the new functional requirement, the new training sample determination subunit being further configured to:
in response to the new functional requirement being similar-point-of-interest recommendation, determine a user questionnaire corresponding to the similar-point-of-interest recommendation; and
generate the new training sample according to the user questionnaire;
and correspondingly, the new geographic model generation unit is further configured to:
generate, on the basis of the target geographical pre-training model, a new geographic model for recommending similar points of interest according to the current point of interest through a model fine-tuning technique and the new training sample.
16. The apparatus of claim 14, wherein the new geographic model generation unit is further configured to:
in response to the new functional requirement being random exploration, generate, on the basis of the target geographical pre-training model, a new geographic model for recommending other points of interest according to the current point of interest through a model fine-tuning technique and the new training sample.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a pre-training method of a geographical pre-training model as defined in any one of claims 1 to 5 and/or a model fine-tuning method of a geographical pre-training model as defined in any one of claims 6 to 8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the pre-training method of the geographical pre-training model of any one of claims 1-5 and/or the model fine-tuning method of the geographical pre-training model of any one of claims 6-8.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210230756.XA (CN114357105B) | 2022-03-10 | 2022-03-10 | Pre-training method and model fine-tuning method of geographic pre-training model |
| PCT/CN2022/113287 (WO2023168909A1) | 2022-03-10 | 2022-08-18 | Pre-training method and model fine-tuning method for geographical pre-training model |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN114357105A | 2022-04-15 |
| CN114357105B | 2022-06-10 |
Family
ID=81094841
Families Citing this family (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114357105B * | 2022-03-10 | 2022-06-10 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Pre-training method and model fine-tuning method of geographic pre-training model |
| CN114841282A * | 2022-05-20 | 2022-08-02 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Training method of pre-training model, and generation method and device of solution model |
| CN114998684B * | 2022-05-20 | 2023-06-23 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Training method and positioning adjustment method for geographic and visual cross-mode pre-training model |
| CN115186738B * | 2022-06-20 | 2023-04-07 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Model training method, device and storage medium |
| CN115620157B * | 2022-09-21 | 2024-07-09 | Tsinghua University | Method and device for learning characterization of satellite image |
| CN118014065A * | 2024-01-30 | 2024-05-10 | Xinjiang Zezhi Information Technology Co., Ltd. | Multi-mode heterogeneous admission data integration method based on knowledge graph |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110929162A * | 2019-12-04 | 2020-03-27 | Tencent Technology (Shenzhen) Co., Ltd. | Recommendation method and device based on interest points, computer equipment and storage medium |
| CN111522888A * | 2020-04-22 | 2020-08-11 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and device for mining competitive relationship between interest points |
| CN112069415A * | 2020-08-13 | 2020-12-11 | Ocean University of China | Interest point recommendation method based on heterogeneous attribute network characterization learning |
| CN112559885A * | 2020-12-25 | 2021-03-26 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and device for determining training model of map interest point and electronic equipment |
| CN113505306A * | 2021-06-21 | 2021-10-15 | Guangdong Communication Polytechnic | Interest point recommendation method, system and medium based on heterogeneous graph neural network |
Family Cites Families (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9009177B2 * | 2009-09-25 | 2015-04-14 | Microsoft Corporation | Recommending points of interests in a region |
| US9767565B2 * | 2015-08-26 | 2017-09-19 | Digitalglobe, Inc. | Synthesizing training data for broad area geospatial object detection |
| CN114357105B * | 2022-03-10 | 2022-06-10 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Pre-training method and model fine-tuning method of geographic pre-training model |
Also Published As

| Publication number | Publication date |
|---|---|
| WO2023168909A1 | 2023-09-14 |
| CN114357105A | 2022-04-15 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |