
CN110928987A - Legal provision retrieval method based on neural network hybrid model and related equipment - Google Patents


Info

Publication number: CN110928987A
Application number: CN201910991657.1A
Authority: CN (China)
Prior art keywords: vector, text, neural network, normalization, feature
Legal status: Granted; active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110928987B
Inventors: 于修铭, 雷骏峰, 刘嘉伟, 陈晨, 李可, 汪伟
Current assignee: Ping An Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Ping An Technology Shenzhen Co Ltd
Application filed 2019-10-18 by Ping An Technology Shenzhen Co Ltd
Priority: CN201910991657.1A; PCT/CN2019/119314 (published as WO2021072892A1)
Publications: CN110928987A (application, 2020-03-27), CN110928987B (grant, 2023-07-25)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/3331: Query processing
    • G06F 16/334: Query execution
    • G06F 16/3346: Query execution using probabilistic model
    • G06F 16/3347: Query execution using vector based model
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application relates to the field of artificial intelligence and discloses a legal provision retrieval method based on a neural network hybrid model, together with related equipment. The method comprises the following steps: acquiring an input text and vectorizing it to obtain a first text vector and a second text vector; performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector; splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector; and normalizing the mixed stack vector to obtain a text retrieval result. By feeding the input text through multiple input paths, vectorizing each path, applying a cyclic stack-embedding operation, splicing the results, and applying the stack-embedding operation again, the method obtains the legal provision retrieval result and can effectively improve the accuracy of legal provision retrieval.

Description

Legal provision retrieval method based on neural network hybrid model and related equipment
Technical Field
The application relates to the field of artificial intelligence, in particular to a legal provision retrieval method based on a neural network hybrid model and related equipment.
Background
Knowledge graph technology is increasingly becoming a foundation of artificial intelligence and an important method for machines to understand natural language and build knowledge networks. In recent years, applications of knowledge graphs in the judicial field have flourished: relying on a legal knowledge graph, a rapid legal provision retrieval system can retrieve legal provisions online from the text content a user inputs, improving the quality and efficiency of court adjudication work.
A legal provision retrieval system is usually used by legal practitioners to retrieve the provisions relevant to a case from the information in that case, which improves case-handling efficiency and removes the need to search for relevant provisions manually. However, current legal provision retrieval usually relies on natural language processing techniques, mostly text similarity, keyword matching, and the like. The most typical approach is based on the Transformer model, through which the relevant legal provision information in a case can be obtained; but because that model can only learn from the preceding or the following context during training, its prediction accuracy is not high and it is time-consuming.
Disclosure of Invention
This application feeds the input text through multiple input paths, vectorizes each path, applies a cyclic stack-embedding operation, splices the results, and applies the stack-embedding operation again to obtain the legal provision retrieval result, thereby effectively improving the accuracy of legal provision retrieval.
In order to achieve the above purpose, the technical solution of the present application provides a legal provision retrieval method based on a neural network hybrid model and related devices.
The application discloses a legal provision retrieval method based on a neural network hybrid model, which comprises the following steps:
acquiring an input text, and vectorizing the input text to obtain a first text vector and a second text vector;
performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector;
and normalizing the mixed stack vector to obtain a text retrieval result.
Preferably, the obtaining the input text and vectorizing the input text to obtain a first text vector and a second text vector includes:
acquiring an input text, and setting the input text as a first text;
performing entity linking on the first text to obtain elements in the first text, splicing the elements into a context, and setting the context as a second text;
and respectively vectorizing the first text and the second text to obtain a first text vector and a second text vector.
Preferably, the vectorizing the first text and the second text respectively to obtain a first text vector and a second text vector includes:
performing character segmentation on the first text and the second text to obtain each character in the first text and the second text;
and presetting a vector dimension, and vectorizing each character of the first text and each character of the second text according to the vector dimension to obtain a first text vector and a second text vector.
Preferably, the stack embedding of the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector includes:
adding the position information of the first text vector and the first text vector to obtain a first position vector, and adding the position information of the second text vector and the second text vector to obtain a second position vector;
inputting the first position vector and the second position vector into a neural network model respectively for normalization to obtain a first normalized hidden vector and a second normalized hidden vector;
extracting features of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector;
inputting the first feature vector and the second feature vector into a neural network model for normalization to obtain a first normalized vector and a second normalized vector, inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and performing cyclic processing on the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector.
Preferably, the extracting features of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector includes:
inputting the first normalized hidden vector and the second normalized hidden vector into a neural network model for feature extraction, and adding the feature-extracted vectors to the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector;
presetting a first cycle number, inputting the first feature hidden vector and the second feature hidden vector into a neural network model for normalization, inputting the vectors obtained after normalization into the neural network model for feature extraction, adding the vectors obtained after feature extraction to the first position vector and the second position vector respectively, and repeating this step according to the preset first cycle number to obtain the first feature vector and the second feature vector.
Preferably, the inputting of the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and the cyclic processing of the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector, includes:
inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing, and adding the vectors obtained after model processing to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector;
presetting a second cycle number, adding the first coding block vector and the second coding block vector to position information respectively to obtain position vectors, inputting the position vectors into a neural network model for normalization to obtain normalized hidden vectors, performing feature extraction on the normalized hidden vectors to obtain feature vectors, normalizing the feature vectors to obtain normalized vectors, inputting the normalized vectors into the self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and repeating this step according to the preset second cycle number to obtain the first cyclic vector and the second cyclic vector.
Preferably, the normalizing the mixed stack vector to obtain a text retrieval result includes:
presetting a legal provision probability threshold;
inputting the mixed stack vector into a full connection layer of a convolutional neural network for linear processing to obtain a vector to be classified, and carrying out normalization processing on the vector to be classified to obtain the probability corresponding to each legal provision;
and comparing the probability corresponding to each legal provision with the preset legal provision probability threshold, and outputting all legal provisions whose probability is greater than the legal provision probability threshold.
The application also discloses a legal provision retrieval device based on a neural network hybrid model, the device comprising:
an acquisition module: configured to acquire an input text and vectorize the input text to obtain a first text vector and a second text vector;
a first stack module: configured to perform stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
a second stack module: configured to splice the first cyclic vector and the second cyclic vector to obtain a mixed vector and perform stack embedding on the mixed vector to obtain a mixed stack vector;
an output module: configured to normalize the mixed stack vector to obtain a text retrieval result.
The application also discloses a computer device, comprising a memory and a processor, the memory storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of the above retrieval method.
The present application also discloses a storage medium readable and writable by a processor, the storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the above-described retrieval method.
The beneficial effect of this application is that the input text is fed through multiple input paths, each path is vectorized, a cyclic stack-embedding operation is applied, the results are spliced, and the stack-embedding operation is applied again to obtain the legal provision retrieval result, which can effectively improve the accuracy of legal provision retrieval.
Drawings
Fig. 1 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a first embodiment of the present application;
Fig. 2 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a second embodiment of the present application;
Fig. 3 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a third embodiment of the present application;
Fig. 4 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a fourth embodiment of the present application;
Fig. 5 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a fifth embodiment of the present application;
Fig. 6 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a sixth embodiment of the present application;
Fig. 7 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a seventh embodiment of the present application;
Fig. 8 is a schematic structural diagram of a legal provision retrieval device based on a neural network hybrid model according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
A first embodiment of the present application is a legal provision retrieval method based on a neural network hybrid model, which is shown in Fig. 1 and includes the following steps:
Step s101, acquiring an input text, and vectorizing the input text to obtain a first text vector and a second text vector;
Specifically, the input text is legal-provision-related content of any length and may be a complete sentence, for example: "What legal provisions need to be consulted in a loan relationship?" After the user inputs this sentence into the system, the system obtains the input text.
Specifically, from the text information input by the user, element information in the input text can be extracted through an entity linking technique. The element information can include a dispute focus, fact elements, and evidence. In the example above, the dispute focus is whether the loan relationship is established, the fact element is whether a loan note/debt note/receipt/loan contract was executed, and the evidence is the loan contract.
Specifically, after the input text is obtained, a first text vector and a second text vector can be obtained by vectorizing the input text and the element information in the text, respectively.
Step s102, performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
specifically, the stack embedding includes performing an embedding operation on the first text vector and the second text vector, and executing a plurality of embedding operations in series to complete the stack embedding operation; when the embedding operation is executed, firstly, the position information in the first text vector and the second text vector is obtained, and the position information is calculated according to a calculation formula
PE(p, 2i) = sin(p / 10000^(2i/d)), PE(p, 2i+1) = cos(p / 10000^(2i/d)),
where p represents the position of a word in the word vector, i represents the position of an element in the vector corresponding to each word, and d represents the vector dimension; the position information is then added to the first text vector and the second text vector respectively to obtain a first position vector and a second position vector.
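As a concrete illustration, the position-information calculation above can be sketched in Python with PyTorch; the function name, tensor shapes, and library choice are illustrative assumptions, not taken from the patent:

```python
import torch

def positional_encoding(seq_len: int, d: int) -> torch.Tensor:
    """Sinusoidal position information: sin on even element indices,
    cos on odd element indices, as in the formula above."""
    p = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)  # word positions
    i = torch.arange(0, d, 2, dtype=torch.float32)               # even element indices
    angles = p / torch.pow(10000.0, i / d)                       # (seq_len, d/2)
    pe = torch.zeros(seq_len, d)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

# A position vector is the element-wise sum of a text vector and its
# position information, e.g. for a 128 x 128 first text vector:
# first_position_vector = first_text_vector + positional_encoding(128, 128)
```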
Specifically, after the first position vector and the second position vector are obtained, the first position vector and the second position vector are input into a neural network model for normalization, and the normalization can be performed according to a formula
LN(a) = (a - μ) / σ, where μ = (1/H) Σ_{i=1..H} a_i and σ = sqrt((1/H) Σ_{i=1..H} (a_i - μ)²),
where μ is the mean, σ is the standard deviation, a is the position vector, and H is the number of neurons in the neural network layer, so as to obtain a first normalized hidden vector and a second normalized hidden vector. The first normalized hidden vector and the second normalized hidden vector are then input into a convolutional neural network for feature extraction, which can be performed through the convolution kernels of the network and consists of extracting the vector features; after the vector features are extracted, the feature-extracted vectors are added to the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector.
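A minimal sketch of the normalization formula above, under the same PyTorch assumptions (the epsilon term is an added numerical-stability assumption; torch.nn.LayerNorm provides a trainable equivalent):

```python
import torch

def normalize(a: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize each H-element vector: subtract the mean mu and divide
    by the standard deviation sigma, as in the formula above."""
    mu = a.mean(dim=-1, keepdim=True)
    sigma = a.std(dim=-1, unbiased=False, keepdim=True)
    return (a - mu) / (sigma + eps)

# first_normalized_hidden = normalize(first_position_vector)
# second_normalized_hidden = normalize(second_position_vector)
```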
Specifically, after the first feature hidden vector and the second feature hidden vector are obtained, position information can be computed from them and added to each, yielding a new first position vector and a new second position vector. The new position vectors are input into the neural network for normalization to obtain a new first normalized hidden vector and a new second normalized hidden vector, and these are input into the convolutional neural network again for feature extraction to obtain a new first feature hidden vector and a new second feature hidden vector. This step is repeated N times, where N can be preset; for example, N = 6 gives good results. After the step has been completed N times, the first feature vector and the second feature vector are obtained.
Specifically, after the first feature vector and the second feature vector are obtained, they can be input into the neural network model again for normalization to obtain a first normalized vector and a second normalized vector. The first normalized vector and the second normalized vector are then input into the self-attention neural network model for calculation, and the calculated vectors are added to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector; obtaining the coding block vectors means that one embedding operation is complete.
Specifically, after the first coding block vector and the second coding block vector are obtained, each is added to the position information corresponding to it to obtain new first position information and new second position information. The new position information is input into the neural network model for normalization to obtain a new first normalized hidden vector and a new second normalized hidden vector; these are input into the neural network model again for feature extraction to obtain a new first feature vector and a new second feature vector; the new feature vectors are input into the neural network model for normalization to obtain a new first normalized vector and a new second normalized vector; and the new normalized vectors are input into the self-attention neural network model for calculation, with the results added to the new first feature vector and the new second feature vector respectively to obtain a new first coding block vector and a new second coding block vector. This step is repeated N times, where N may be preset; for example, N = 6 gives good results. When the step has been completed N times, the first cyclic vector and the second cyclic vector are obtained; obtaining the cyclic vectors means that the stack embedding operation is complete.
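Putting these pieces together, one embedding operation and its N-fold repetition can be sketched as a hypothetical PyTorch module; the kernel size, head count, and N = 6 are illustrative choices, the blocks are shown untrained, and positional_encoding reuses the sketch above:

```python
import torch
import torch.nn as nn

class EmbeddingBlock(nn.Module):
    """One embedding operation: normalize -> convolutional feature
    extraction -> residual add, then normalize -> self-attention ->
    residual add, yielding a coding block vector."""
    def __init__(self, d: int = 128, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d)
        self.conv = nn.Conv1d(d, d, kernel_size=3, padding=1)
        self.norm2 = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d)
        h = self.conv(self.norm1(x).transpose(1, 2)).transpose(1, 2) + x
        n = self.norm2(h)
        a, _ = self.attn(n, n, n)                         # self-attention
        return a + h                                      # coding block vector

def stack_embedding(x: torch.Tensor, n: int = 6) -> torch.Tensor:
    """Add position information, then run n embedding operations in
    series; the final output is the cyclic vector."""
    x = x + positional_encoding(x.size(1), x.size(2))
    for block in (EmbeddingBlock(x.size(2)) for _ in range(n)):
        x = block(x)
    return x
```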
Step s103, splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector;
Specifically, after the first cyclic vector and the second cyclic vector are obtained, they can be spliced to obtain a mixed vector, where the splicing is a concatenation of vectors: for example, if the first cyclic vector is a 20 × 128-dimensional vector and the second cyclic vector is a 30 × 128-dimensional vector, the spliced vector is a 50 × 128-dimensional vector.
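The splicing itself is an ordinary concatenation along the sequence axis, for example:

```python
import torch

first_cyclic = torch.randn(20, 128)    # 20 x 128-dimensional cyclic vector
second_cyclic = torch.randn(30, 128)   # 30 x 128-dimensional cyclic vector
mixed = torch.cat([first_cyclic, second_cyclic], dim=0)
print(mixed.shape)                     # torch.Size([50, 128])
```

The mixed vector can then be passed through the same stack_embedding sketch shown earlier (after adding a batch dimension with mixed.unsqueeze(0)) to produce the mixed stack vector.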
Specifically, after the mixed vector is obtained, stack embedding can be performed on it in the same way as in step s102: the mixed vector is added to its corresponding position information to obtain a new position vector; the new position vector is normalized to obtain a new normalized hidden vector; feature extraction is performed on the new normalized hidden vector to obtain a new feature vector; the new feature vector is normalized again to obtain a new normalized vector; and finally the new normalized vector is input into the self-attention neural network model for calculation, with the result added to the new feature vector to obtain a new coding block vector. The coding block vector is cycled through the preceding steps to obtain a cyclic vector, and the final cyclic vector is the mixed stack vector.
And step s104, performing normalization processing on the mixed stack vector to obtain a text retrieval result.
Specifically, after the mixed stack vector is obtained, linear processing can be performed on it by inputting the mixed stack vector into the fully connected layer of a convolutional neural network, which yields the vector to be classified. The fully connected layer can be regarded as a matrix multiplication; for example, if the input is a 128 × 128 vector flattened to [1, 2, ..., 128 × 128] and the matrix of the fully connected layer is a (128 × 128) × 4 matrix, the result is a vector of shape (1, 4). The purpose of the linear processing is dimensionality reduction: in the example above the vector is reduced to 4 dimensions, and this reduced 4-dimensional vector is the vector to be classified. The output dimension of the fully connected layer equals the total number of retrievable legal provisions; for example, if 2000 legal provisions can be retrieved, the output is a (1, 2000) vector. The construction of the fully connected layer therefore needs to be preset according to the number of legal provisions.
Specifically, after the vector to be classified is obtained, it can be normalized through a softmax function; after normalization, a probability is output for each dimension of the vector to be classified, where each dimension corresponds to one legal provision.
Specifically, a legal provision probability threshold can be preset. After the probability of each legal provision is obtained, each probability is compared with the preset probability threshold; if a probability is greater than the threshold, the legal provision corresponding to that probability is output, otherwise it is not output.
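A sketch of this output stage, assuming 2000 retrievable provisions, a flattened 128 × 128 mixed stack vector, and a threshold of 0.5 (all three values are illustrative assumptions):

```python
import torch
import torch.nn as nn

NUM_PROVISIONS = 2000   # assumed total number of retrievable legal provisions
THRESHOLD = 0.5         # assumed preset legal provision probability threshold

fc = nn.Linear(128 * 128, NUM_PROVISIONS)   # fully connected layer

def retrieve(mixed_stack: torch.Tensor) -> list:
    """Linear dimension reduction, softmax normalization, thresholding."""
    logits = fc(mixed_stack.reshape(1, -1))            # vector to be classified
    probs = torch.softmax(logits, dim=-1).squeeze(0)   # one probability per provision
    hits = (probs > THRESHOLD).nonzero(as_tuple=True)[0]
    return hits.tolist()                               # provisions to output
```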
In this embodiment, the retrieval result for the legal provisions is obtained by feeding the input text through multiple input paths, vectorizing each path, applying the cyclic stack-embedding operation, splicing the results, and applying the stack-embedding operation again, which can effectively improve the accuracy of legal provision retrieval.
Fig. 2 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a second embodiment of the present application, where as shown in the drawing, in step s101, an input text is obtained, and vectorization is performed on the input text to obtain a first text vector and a second text vector, including:
step s201, acquiring an input text, and setting the input text as a first text;
Specifically, after the input text is acquired, it can be duplicated into two copies, and the input text itself is set as the first text.
Step s202, performing entity linking on the first text to obtain elements in the first text, splicing the elements into a context, and setting the context as a second text;
Specifically, the element information in the input text can be extracted through an entity linking technique, where the element information includes a dispute focus, fact elements, and evidence. In the example "What legal provisions need to be consulted in a loan relationship?", the dispute focus is whether the loan relationship is established, the fact element is whether a loan note/debt note/receipt/loan contract was signed, and the evidence is the loan contract. All the element information is then spliced into a context, and the context can be set as the second text.
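For illustration only, the splicing of extracted elements into the second text might look like the following; the element names and values are hypothetical, since the patent does not specify an interface for the entity linking step:

```python
def build_second_text(elements: dict) -> str:
    """Splice the extracted elements (dispute focus, fact elements,
    evidence) into a single context string, the second input path."""
    return " ".join(str(v) for v in elements.values())

second_text = build_second_text({
    "dispute_focus": "whether the loan relationship is established",
    "fact_element": "whether a loan contract was signed",
    "evidence": "the loan contract",
})
```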
Step s203, vectorizing the first text and the second text respectively to obtain a first text vector and a second text vector.
Specifically, after a first text and a second text are obtained, vectorization can be performed on the first text and the second text respectively to obtain a first text vector and a second text vector.
In the embodiment, the input text is divided into two paths, the two paths of texts are processed differently and vectorized, so that more context information in the texts can be acquired, and the text retrieval effect is improved.
Fig. 3 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a third embodiment of the present application, as shown in the figure, in step s203, vectorizing the first text and the second text respectively to obtain a first text vector and a second text vector, including:
Step s301, performing character segmentation on the first text and the second text to obtain each character in the first text and the second text;
Specifically, the segmentation can be performed by a segmentation tool, and each character in the first text and the second text is obtained after segmenting the two texts.
Step s302, presetting a vector dimension, and vectorizing each character of the first text and each character of the second text according to the vector dimension to obtain a first text vector and a second text vector.
Specifically, the vectorization can be performed by the word2vec method, and the vector dimension can be set to 128. The vectorization function is X = V(char), where char represents a single character; for example, V('want') = [v1, v2, ..., v128] is a 128-dimensional vector. The text vector dimension determines the number of character vectors for the input text and can be preset, for example to 128, in which case the input text is represented by 128 character vectors: when the number of character vectors exceeds 128, the vectors beyond 128 are deleted, and when there are fewer than 128, the missing character vectors are filled with 0. For example, when the user enters the text "What legal provisions need to be consulted in a loan relationship?", vectorizing the characters gives X1 = V('want'), X2 = V('ask'), X3 = V('borrow'), X4 = V('loan'), ..., X17 = V('provision'), X18 = V('?'), X19 = [0, 0, ..., 0], ..., X128 = [0, 0, ..., 0]. After each character in the first text and the second text is vectorized, the first text vector and the second text vector are obtained.
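A sketch of this character-level vectorization with truncation and zero padding; the char_vec lookup is a hypothetical stand-in for a trained word2vec table:

```python
import torch

MAX_LEN = 128   # preset number of character vectors per text
DIM = 128       # preset vector dimension

def vectorize(text: str, char_vec) -> torch.Tensor:
    """char_vec maps a character to its 128-dimensional vector.
    Characters beyond MAX_LEN are deleted; shorter texts are padded
    with zero vectors."""
    rows = [char_vec(c) for c in text[:MAX_LEN]]
    rows += [torch.zeros(DIM)] * (MAX_LEN - len(rows))
    return torch.stack(rows)   # (128, 128) text vector
```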
In the embodiment, the text is vectorized, so that the context information in the text can be better acquired, and the text retrieval can be more accurately realized.
Fig. 4 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a fourth embodiment of the present application. As shown in the figure, the step s102 of performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector includes:
step s401, adding the position information of the first text vector and the first text vector to obtain a first position vector, and adding the position information of the second text vector and the second text vector to obtain a second position vector;
specifically, first, position information in the first text vector and the second text vector is obtained, and the position information is obtained according to a calculation formula
PE(p, 2i) = sin(p / 10000^(2i/d)), PE(p, 2i+1) = cos(p / 10000^(2i/d)),
where p represents the position of a word in the word vector, i represents the position of an element in the vector corresponding to each word, and d represents the vector dimension; the position information is then added to the first text vector and the second text vector respectively to obtain a first position vector and a second position vector.
Step s402, inputting the first position vector and the second position vector into a neural network model for normalization processing, so as to obtain a first normalization hidden vector and a second normalization hidden vector;
specifically, after the first position vector and the second position vector are obtained, the first position vector and the second position vector are input into a neural network model for normalization, and the normalization can be performed according to a formula
LN(a) = (a - μ) / σ, where μ = (1/H) Σ_{i=1..H} a_i and σ = sqrt((1/H) Σ_{i=1..H} (a_i - μ)²),
where μ is the mean, σ is the standard deviation, a is the position vector, and H is the number of neurons in the neural network layer, thereby obtaining the first normalized hidden vector and the second normalized hidden vector.
Step s403, performing feature extraction on the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector;
Specifically, after the first normalized hidden vector and the second normalized hidden vector are obtained, they can be input into a convolutional neural network for feature extraction, which can be performed by the convolution kernels of the network and consists of extracting the vector features; after the vector features are extracted, the first feature vector and the second feature vector are obtained.
Step s404, inputting the first feature vector and the second feature vector into a neural network model for normalization to obtain a first normalized vector and a second normalized vector, inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and performing cyclic processing on the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector.
Specifically, after the first feature vector and the second feature vector are obtained, they can be input into a neural network model for normalization to obtain a first normalized vector and a second normalized vector; the normalized vectors are input into the self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and the coding block vectors are then processed cyclically to obtain a first cyclic vector and a second cyclic vector.
In the embodiment, the text vectors are subjected to stack embedding operation, so that the collection and identification of text information can be improved, and the accuracy of text retrieval is improved.
Fig. 5 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a fifth embodiment of the present application, as shown in the drawing, in step s403, performing feature extraction on the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector, including:
step s501, inputting the first normalized hidden vector and the second normalized hidden vector into a neural network model for feature extraction, and adding the vectors after feature extraction with the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector;
Specifically, after the first normalized hidden vector and the second normalized hidden vector are obtained, they can be input into a convolutional neural network for feature extraction, which can be performed by the convolution kernels of the network and consists of extracting the vector features; after the vector features are extracted, the feature-extracted vectors are added to the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector.
Step s502, presetting a first cycle number, inputting the first feature hidden vector and the second feature hidden vector into a neural network model for normalization, inputting the vectors obtained after normalization into the neural network model for feature extraction, adding the vectors obtained after feature extraction to the first position vector and the second position vector respectively, and repeating this step according to the preset first cycle number to obtain the first feature vector and the second feature vector.
Specifically, after the first feature hidden vector and the second feature hidden vector are obtained, position information can be computed from them and added to each, yielding a new first position vector and a new second position vector. The new position vectors are input into the neural network for normalization to obtain a new first normalized hidden vector and a new second normalized hidden vector, and these are input into the convolutional neural network again for feature extraction to obtain a new first feature hidden vector and a new second feature hidden vector. This step is repeated N times, where N can be preset; for example, N = 6 gives good results. Each time the step is repeated, the current output is used as the input of the next pass; after the step has been completed N times, the first feature vector and the second feature vector are obtained.
In the embodiment, the feature information in the text can be more accurately extracted by extracting the feature of the text vector, so that the accuracy of text retrieval is improved.
Fig. 6 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a sixth embodiment of the present application. As shown in the figure, the step s404 of inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and performing cyclic processing on the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector, includes:
step s601, inputting the first normalization vector and the second normalization vector into a self-attention neural network model for processing, and adding vectors obtained after model processing with the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector;
Specifically, after the first normalized vector and the second normalized vector are obtained, they can be input into the self-attention neural network model for calculation, and the calculated vectors are added to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector.
Step s602, presetting a second cycle number, adding the first coding block vector and the second coding block vector to position information respectively to obtain position vectors, inputting the position vectors into a neural network model for normalization to obtain normalized hidden vectors, performing feature extraction on the normalized hidden vectors to obtain feature vectors, normalizing the feature vectors to obtain normalized vectors, inputting the normalized vectors into the self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and repeating this step according to the preset second cycle number to obtain the first cyclic vector and the second cyclic vector.
Specifically, after the first coding block vector and the second coding block vector are obtained, each is added to the position information corresponding to it to obtain new first position information and new second position information. The new position information is input into the neural network model for normalization to obtain a new first normalized hidden vector and a new second normalized hidden vector; these are input into the neural network model again for feature extraction to obtain a new first feature vector and a new second feature vector; the new feature vectors are input into the neural network model for normalization to obtain a new first normalized vector and a new second normalized vector; and the new normalized vectors are input into the self-attention neural network model for calculation, with the results added to the new first feature vector and the new second feature vector respectively to obtain a new first coding block vector and a new second coding block vector. This step is repeated N times, where N may be preset; for example, N = 6 gives good results. Each time the step is repeated, the current output is used as the input of the next pass; after the step has been completed N times, the first cyclic vector and the second cyclic vector are obtained.
In this embodiment, the accuracy of text retrieval can be improved by performing stack embedding processing on the text vectors.
Fig. 7 is a schematic flowchart of a legal provision retrieval method based on a neural network hybrid model according to a seventh embodiment of the present application, where as shown in the drawing, in step s104, the normalization process is performed on the hybrid stack vector to obtain a text retrieval result, and the method includes:
step s701, presetting a legal provision probability threshold;
specifically, the probability threshold is used for excluding the legal provision with a low probability, and may be set in the system in advance.
Step s702, inputting the mixed stack vector into a full connection layer of a convolutional neural network for linear processing to obtain a vector to be classified, and performing normalization processing on the vector to be classified to obtain the probability corresponding to each legal provision;
specifically, after the mixed stack vector is obtained, linear processing may be performed on the mixed stack vector, where the linear processing includes inputting the mixed stack vector into a fully-connected layer of a convolutional neural network to perform linear processing, so as to obtain a vector to be classified, and the fully-connected layer may be regarded as matrix multiplication, for example: the input vector is a 128 x128 vector, [1, 2., 128 x128 ], and the matrix of the fully connected layers is a (128 x 128) × 4 matrix, then the result is a length (1,4) vector, and the purpose of linear processing on the mixed stack vector is to reduce the dimension, for example, the vector is reduced from 128 dimensions to 4 dimensions by linear processing in the above example, and the reduced dimension 4-dimensional vector is the vector to be classified. The vector dimension after passing through the full connection layer is the total number of the retrieved legal provisions, for example, if the total number of the retrieved legal provisions is 2000, the output vector is the vector of (1, 2000). Therefore, the construction of the full connection layer needs to be preset according to the number of legal provisions.
Specifically, after the vector to be classified is obtained, it can be normalized through a softmax function; after normalization, a probability is output for each dimension of the vector to be classified, where each dimension corresponds to one legal provision.
Step s703, comparing the probability corresponding to each legal provision with the preset legal provision probability threshold, and outputting all legal provisions whose probability is greater than the legal provision probability threshold.
Specifically, after the probability of each legal provision is obtained, the probability corresponding to each legal provision can be compared with the preset probability threshold; if a probability is greater than the threshold, the legal provision corresponding to that probability is output, otherwise it is not output.
In this embodiment, by setting a probability threshold and outputting legal provisions larger than the probability threshold, a text retrieval result can be obtained quickly.
A structure of a legal provision retrieval apparatus based on a neural network hybrid model according to an embodiment of the present application is shown in fig. 8, and includes:
an obtaining module 801, a first stack module 802, a second stack module 803 and an output module 804; the obtaining module 801 is connected to the first stack module 802, the first stack module 802 is connected to the second stack module 803, and the second stack module 803 is connected to the output module 804; the obtaining module 801 is configured to obtain an input text, and perform vectorization on the input text to obtain a first text vector and a second text vector; the first stack module 802 is configured to perform stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector; the second stack module 803 is configured to splice the first cyclic vector and the second cyclic vector to obtain a mixed vector, and perform stack embedding on the mixed vector to obtain a mixed stack vector; the output module 804 is configured to perform normalization processing on the mixed stack vector to obtain a text retrieval result.
The embodiment of the application also discloses a computer device, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by one or more processors, cause the one or more processors to execute the steps in the retrieval method in the above embodiments.
The embodiment of the present application further discloses a storage medium readable and writable by a processor, the storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps in the retrieval method of the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A legal provision retrieval method based on a neural network hybrid model is characterized by comprising the following steps:
acquiring an input text, and vectorizing the input text to obtain a first text vector and a second text vector;
performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector;
and carrying out normalization processing on the mixed stack vector to obtain a text retrieval result.
2. The method of claim 1, wherein the obtaining an input text, vectorizing the input text to obtain a first text vector and a second text vector comprises:
acquiring an input text, and setting the input text as a first text;
performing entity linking on the first text to obtain elements in the first text, splicing the elements into a context, and setting the context as a second text;
and respectively vectorizing the first text and the second text to obtain a first text vector and a second text vector.
3. The method of claim 2, wherein the vectorizing the first text and the second text to obtain a first text vector and a second text vector comprises:
performing character segmentation on the first text and the second text to obtain each character in the first text and the second text;
and presetting a vector dimension, and vectorizing each character of the first text and each character of the second text according to the vector dimension to obtain a first text vector and a second text vector.
4. The method of claim 3, wherein the step of performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector comprises:
adding the position information of the first text vector and the first text vector to obtain a first position vector, and adding the position information of the second text vector and the second text vector to obtain a second position vector;
inputting the first position vector and the second position vector into a neural network model respectively for normalization to obtain a first normalized hidden vector and a second normalized hidden vector;
extracting features of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector;
inputting the first feature vector and the second feature vector into a neural network model for normalization processing to obtain a first normalized vector and a second normalized vector, inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and performing cyclic processing on the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector.
5. The method of claim 4, wherein the extracting features from the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector comprises:
inputting the first normalized hidden vector and the second normalized hidden vector into a neural network model for feature extraction, and adding the feature-extracted vectors to the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector;
presetting a first cycle number, inputting the first feature hidden vector and the second feature hidden vector into a neural network model for normalization, inputting the vectors obtained after normalization into the neural network model for feature extraction, adding the vectors obtained after feature extraction to the first position vector and the second position vector respectively, and repeating this step according to the preset first cycle number to obtain the first feature vector and the second feature vector.
6. The method of claim 5, wherein the inputting of the first normalized vector and the second normalized vector into the self-attention neural network model to obtain a first coding block vector and a second coding block vector, and the cyclic processing of the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector, comprises:
inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing, and adding the vectors obtained after model processing to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector;
presetting a second cycle number, adding the first coding block vector and the second coding block vector to position information respectively to obtain position vectors, inputting the position vectors into a neural network model for normalization to obtain normalized hidden vectors, performing feature extraction on the normalized hidden vectors to obtain feature vectors, normalizing the feature vectors to obtain normalized vectors, inputting the normalized vectors into the self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and repeating this step according to the preset second cycle number to obtain the first cyclic vector and the second cyclic vector.
7. The method of claim 6, wherein normalizing the mixed stack vector to obtain a text retrieval result comprises:
presetting a legal provision probability threshold;
inputting the mixed stack vector into a fully connected layer of a convolutional neural network for linear processing to obtain a vector to be classified, and normalizing the vector to be classified to obtain a probability corresponding to each legal provision;
comparing the probability corresponding to each legal provision with the preset legal provision probability threshold, and outputting all legal provisions whose probabilities exceed the threshold.
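A hedged sketch of this retrieval step; softmax is assumed as the normalization that yields the per-provision probabilities, and the threshold value and provision list are placeholders, not values fixed by the claim:

    import torch
    import torch.nn as nn

    def retrieve_provisions(mixed_stack_vector: torch.Tensor,
                            fully_connected: nn.Linear,
                            provisions: list[str],
                            threshold: float = 0.5) -> list[tuple[str, float]]:
        """Hypothetical sketch of claim 7: linear processing, softmax
        normalization, and thresholded output of legal provisions.
        Assumes a single, unbatched mixed stack vector."""
        to_classify = fully_connected(mixed_stack_vector)   # vector to be classified
        probabilities = torch.softmax(to_classify, dim=-1)  # one probability per provision
        # Output every provision whose probability exceeds the preset threshold.
        return [(name, float(p))
                for name, p in zip(provisions, probabilities.tolist())
                if p > threshold]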
8. A legal provision retrieval apparatus based on a neural network hybrid model, the apparatus comprising:
an acquisition module configured to obtain an input text and vectorize the input text to obtain a first text vector and a second text vector;
a first stack module configured to perform stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
a second stack module configured to splice the first cyclic vector and the second cyclic vector to obtain a mixed vector, and to perform stack embedding on the mixed vector to obtain a mixed stack vector; and
an output module configured to normalize the mixed stack vector to obtain a text retrieval result.
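The four modules compose a straight pipeline; the sketch below assumes simple callables for each module and is illustrative only:

    class LegalProvisionRetriever:
        """Hypothetical composition of the four claimed modules; the
        callables and their interfaces are illustrative assumptions."""
        def __init__(self, acquisition, first_stack, second_stack, output):
            self.acquisition = acquisition    # text -> (first, second) text vectors
            self.first_stack = first_stack    # stack embedding -> cyclic vectors
            self.second_stack = second_stack  # splice + stack embedding -> mixed stack vector
            self.output = output              # normalization -> text retrieval result

        def retrieve(self, input_text: str):
            first_text, second_text = self.acquisition(input_text)
            first_cyclic, second_cyclic = self.first_stack(first_text, second_text)
            mixed_stack = self.second_stack(first_cyclic, second_cyclic)
            return self.output(mixed_stack)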
9. A computer device comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the retrieval method of any one of claims 1 to 7.
10. A storage medium readable by a processor, the storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the retrieval method of any one of claims 1 to 7.
Priority Applications (2)

- CN201910991657.1A (CN110928987B), filed 2019-10-18: Legal provision retrieval method based on neural network hybrid model and related equipment (Active)
- PCT/CN2019/119314 (WO2021072892A1), filed 2019-11-19: Legal provision search method based on neural network hybrid model, and related device

Publications (2)

- CN110928987A, published 2020-03-27 (application publication)
- CN110928987B, published 2023-07-25 (grant publication)

Family ID: 69849151

Families Citing This Family (1)

- CN113282709B, priority 2021-06-01, granted 2022-11-04, Ping An International Smart City Technology Co., Ltd.: Text matching method, device and equipment, and computer-readable storage medium

Citations (4)

- WO2018113498A1, priority 2016-12-23, published 2018-06-28, Beijing Gridsum Technology Co., Ltd.: Method and apparatus for retrieving legal knowledge
- CN110110045A, priority 2019-04-26, published 2019-08-09, Tencent Technology (Shenzhen) Co., Ltd.: Method, apparatus and storage medium for retrieving similar text
- CN110276068A, priority 2019-05-08, published 2019-09-24, Tsinghua University: Legal case analysis method and device
- CN110275936A, priority 2019-05-09, published 2019-09-24, Zhejiang University of Technology: Similar legal case retrieval method based on an autoencoder neural network

Family Cites Families (7)

- US2016/0232441A1, priority 2015-02-05, published 2016-08-11, International Business Machines Corporation: Scoring type coercion for question answering
- CN107679224B, priority 2017-10-20, granted 2020-09-08, Emotibot Technologies (Shanghai) Co., Ltd.: Intelligent question answering method and system for unstructured text
- CN109271524B, priority 2018-08-02, granted 2021-10-15, Institute of Computing Technology, Chinese Academy of Sciences: Entity linking method in a knowledge base question answering system
- CN109446416B, priority 2018-09-26, granted 2021-09-28, Nanjing University: Law recommendation method based on a word vector model
- CN109829055B, priority 2019-02-22, granted 2021-03-12, Soochow University: User law prediction method based on a filter gate mechanism
- CN110188192B, priority 2019-04-16, granted 2023-01-31, Xidian University: Multi-task network construction and multi-scale joint charge and law article prediction method
- CN110334189B, priority 2019-07-11, granted 2023-04-18, Henan University: Microblog topic label determination method based on long short-term memory and self-attention neural networks

Also Published As

- WO2021072892A1, published 2021-04-22

Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant