
CN110928987B - Legal provision retrieval method and related equipment based on neural network hybrid model - Google Patents


Info

Publication number: CN110928987B (application CN201910991657.1A)
Authority: CN (China)
Prior art keywords: vector, text, normalized, neural network, feature
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110928987A
Inventors: 于修铭, 雷骏峰, 刘嘉伟, 陈晨, 李可, 汪伟
Current and original assignee: Ping An Technology Shenzhen Co Ltd
Application filed by Ping An Technology Shenzhen Co Ltd
Priority: CN201910991657.1A; PCT/CN2019/119314 (WO2021072892A1)
Publication of application: CN110928987A
Application granted; publication of grant: CN110928987B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3347 Query execution using vector based model
    • G06F16/3346 Query execution using probabilistic model
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the field of artificial intelligence and discloses a legal provision retrieval method and related equipment based on a neural network hybrid model. The method comprises the following steps: acquiring an input text and vectorizing it to obtain a first text vector and a second text vector; performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector; splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector; and normalizing the mixed stack vector to obtain a text retrieval result. By feeding the input text into multiple paths, vectorizing each path, performing a cyclic stack-embedding operation, splicing the results, and performing the cyclic stack-embedding operation again, the method obtains the legal provision retrieval result and can effectively improve the accuracy of legal provision retrieval.

Description

Legal provision retrieval method and related equipment based on neural network hybrid model
Technical Field
The application relates to the field of artificial intelligence, in particular to a legal provision retrieval method and related equipment based on a neural network hybrid model.
Background
Knowledge graph technology is becoming a foundation of artificial intelligence: it is an important method by which machines understand natural language and build knowledge networks. In recent years, the application of knowledge graphs in the judicial field has quietly risen; a rapid legal provision retrieval system can rely on a legal knowledge graph to retrieve legal provisions online according to text content input by a user, thereby improving the quality and efficiency of court trial work.
Legal provision retrieval systems are generally used by legal practitioners to retrieve the legal provisions relevant to the information in a case, improving case-handling efficiency and removing the need to read and search for relevant provisions manually. At present, legal provision retrieval is usually performed with natural language processing techniques, mostly text similarity, keyword matching and the like. The most typical is the Transformer-based approach: relevant legal provision information for a case can be obtained through a Transformer model, but during training the model learns only the preceding or the following context of the text, so prediction accuracy is low and retrieval is time-consuming.
Disclosure of Invention
Aiming at the defects of the prior art, the application provides a legal provision retrieval method and related equipment based on a neural network hybrid model. By feeding the input text into multiple paths, vectorizing each path, performing a cyclic stack-embedding operation, splicing the results, and performing the cyclic stack-embedding operation again to obtain the legal provision retrieval result, the accuracy of legal provision retrieval can be effectively improved.
In order to achieve the above purpose, the technical scheme of the application provides a legal provision retrieval method and related equipment based on a neural network hybrid model.
The application discloses a legal provision retrieval method based on a neural network hybrid model, which comprises the following steps:
acquiring an input text, and vectorizing the input text to obtain a first text vector and a second text vector;
performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector;
and normalizing the mixed stack vector to obtain a text retrieval result.
Preferably, the acquiring of the input text and the vectorizing of the input text to obtain a first text vector and a second text vector includes:
acquiring an input text, and setting the input text as a first text;
performing entity linking on the first text to obtain elements in the first text, splicing the elements into a context, and setting the context as a second text;
and vectorizing the first text and the second text respectively to obtain a first text vector and a second text vector.
Preferably, the vectorizing of the first text and the second text to obtain a first text vector and a second text vector includes:
performing word segmentation on the first text and the second text to obtain each word in the first text and the second text;
and presetting the dimension of a vector, and respectively vectorizing each word of the first text and each word of the second text according to the dimension of the vector to obtain a first text vector and a second text vector.
Preferably, the performing of stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector includes:
adding the first text vector and the position information of the first text vector to obtain a first position vector, and adding the second text vector and the position information of the second text vector to obtain a second position vector;
respectively inputting the first position vector and the second position vector into a neural network model for normalization processing to obtain a first normalized hidden vector and a second normalized hidden vector;
extracting features of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector;
and inputting the first feature vector and the second feature vector into a neural network model for normalization processing to obtain a first normalized vector and a second normalized vector, inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and cyclically processing the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector.
Preferably, the feature extraction of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector includes:
inputting the first normalized hidden vector and the second normalized hidden vector into a neural network model for feature extraction, and adding the feature-extracted vectors to the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector;
presetting a first cycle number; inputting the first feature hidden vector and the second feature hidden vector into a neural network model for normalization; inputting the normalized vectors into the neural network model for feature extraction; adding the feature-extracted vectors to the first position vector and the second position vector respectively; and repeating these steps the preset first cycle number of times to obtain the first feature vector and the second feature vector.
Preferably, the inputting of the first normalized vector and the second normalized vector into the self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and the cyclic processing of the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector, includes:
inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing, and adding the vectors obtained after the model processing to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector;
presetting a second cycle number; adding the first coding block vector and the second coding block vector to position information to obtain position vectors; inputting the position vectors into a neural network model for normalization to obtain normalized hidden vectors; performing feature extraction on the normalized hidden vectors to obtain feature vectors; normalizing the feature vectors to obtain normalized vectors; inputting the normalized vectors into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector; and repeating these steps the preset second cycle number of times to obtain the first cyclic vector and the second cyclic vector.
Preferably, the normalizing of the mixed stack vector to obtain a text retrieval result includes:
presetting a legal provision probability threshold;
inputting the mixed stack vector into a full-connection layer of a convolutional neural network for linear processing to obtain a vector to be classified, and carrying out normalization processing on the vector to be classified to obtain a probability corresponding to each legal provision;
and comparing the probability corresponding to each legal provision with the preset legal provision probability threshold, and outputting all legal provisions whose probability is greater than the legal provision probability threshold.
The application also discloses a legal provision retrieval device based on a neural network hybrid model, the device comprising:
an acquisition module, configured to acquire an input text and vectorize the input text to obtain a first text vector and a second text vector;
a first stack module, configured to perform stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
a second stack module, configured to splice the first cyclic vector and the second cyclic vector to obtain a mixed vector, and to perform stack embedding on the mixed vector to obtain a mixed stack vector;
and an output module, configured to normalize the mixed stack vector to obtain a text retrieval result.
The application also discloses a computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the retrieval method described above.
The application also discloses a storage medium readable and writable by a processor, the storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the retrieval method described above.
The beneficial effects of this application are: by feeding the input text into multiple paths, vectorizing each path, performing a cyclic stack-embedding operation, splicing the results, and performing the cyclic stack-embedding operation again, the application obtains the legal provision retrieval result and can effectively improve the accuracy of legal provision retrieval.
Drawings
FIG. 1 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a first embodiment of the present application;
FIG. 2 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a second embodiment of the present application;
FIG. 3 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a third embodiment of the present application;
FIG. 4 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a fourth embodiment of the present application;
FIG. 5 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a fifth embodiment of the present application;
FIG. 6 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a sixth embodiment of the present application;
FIG. 7 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a seventh embodiment of the present application;
FIG. 8 is a schematic structural diagram of a legal provision retrieval device based on a neural network hybrid model according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
A legal provision retrieval method flow based on a neural network hybrid model in a first embodiment of the present application is shown in FIG. 1, and the embodiment includes the following steps:
Step S101, an input text is obtained, vectorization is carried out on the input text, and a first text vector and a second text vector are obtained;
Specifically, the input text is legal provision-related content of any length and may be a complete sentence, for example: "What legal provisions need to be referred to in a lending relationship?" When the user inputs this sentence into the system, the system acquires it as the input text.
Specifically, from the text information input by the user, element information in the input text can be extracted by entity linking. The element information may include the dispute focus, fact elements, and evidence. For example, in the text "What legal provisions need to be referred to in a lending relationship?", the dispute focus is whether the lending relationship is established, the fact element is whether a loan note/arrears note/receipt/loan contract was signed, and the evidence is the loan contract.
Specifically, after the input text is obtained, the input text and the element information in the text can be vectorized respectively to obtain the first text vector and the second text vector.
Step S102, performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
Specifically, the stack embedding comprises performing an embedding operation on the first text vector and the second text vector, a stack embedding operation being completed by executing a plurality of embedding operations in series. When the embedding operation is executed, the position information in the first text vector and the second text vector is first acquired according to the calculation formulas $PE_{(p,2i)} = \sin\left(p/10000^{2i/d}\right)$ and $PE_{(p,2i+1)} = \cos\left(p/10000^{2i/d}\right)$, wherein p represents the position of a word in the word vector, i indexes the elements of the vector corresponding to each word, and d represents the vector dimension; the position information is then added to the first text vector and the second text vector respectively to obtain a first position vector and a second position vector.
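For illustration only, a minimal NumPy sketch of this position-information formula, assuming the standard sinusoidal form that the variables p, i and d describe; the 128-word, 128-dimension sizes are the illustrative values used elsewhere in this description, not values fixed by the method:

```python
import numpy as np

def positional_encoding(seq_len: int, d: int) -> np.ndarray:
    """PE(p, 2i) = sin(p / 10000^(2i/d)), PE(p, 2i+1) = cos(p / 10000^(2i/d))."""
    p = np.arange(seq_len)[:, None]          # position of each word in the sequence
    i = np.arange(d // 2)[None, :]           # element-pair index inside each word vector
    angle = p / np.power(10000.0, 2 * i / d)
    pe = np.zeros((seq_len, d))
    pe[:, 0::2] = np.sin(angle)              # even-indexed elements
    pe[:, 1::2] = np.cos(angle)              # odd-indexed elements
    return pe

# Position vector = text vector + position information (element-wise addition).
text_vector = np.random.randn(128, 128)      # assumed: 128 words x 128 dimensions
position_vector = text_vector + positional_encoding(128, 128)
```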
Specifically, after the first position vector and the second position vector are obtained, they are input into a neural network model for normalization processing. The normalization can be performed according to the formula $\bar{a} = \frac{a - \mu}{\sigma}$, with $\mu = \frac{1}{H}\sum_{i=1}^{H} a_i$ and $\sigma = \sqrt{\frac{1}{H}\sum_{i=1}^{H}(a_i - \mu)^2}$, wherein μ is the mean, σ is the standard deviation, a is a position vector, and H is the number of neurons in the neural network; this yields a first normalized hidden vector and a second normalized hidden vector. The first normalized hidden vector and the second normalized hidden vector are then input into a convolutional neural network for feature extraction, which can be performed by the convolution kernels of the network and comprises the extraction of vector features; after the vector features are extracted, the feature-extracted vectors are added to the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector.
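The normalization just described reads as layer normalization over the H neurons; a short sketch under that assumption (the small eps term is a numerical-stability addition of this sketch, not part of the patent text):

```python
import numpy as np

def layer_norm(a: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalize each position vector a over its last axis (the H neurons)."""
    mu = a.mean(axis=-1, keepdims=True)      # mean over the H elements
    sigma = a.std(axis=-1, keepdims=True)    # standard deviation over the H elements
    return (a - mu) / (sigma + eps)

position_vector = np.random.randn(128, 128)          # e.g. a first/second position vector
normalized_hidden = layer_norm(position_vector)      # first/second normalized hidden vector
```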
Specifically, after the first feature hidden vector and the second feature hidden vector are obtained, position information can be acquired from them and added to the first feature hidden vector and the second feature hidden vector respectively to obtain a new first position vector and a new second position vector. The new position vectors are input into the neural network for normalization to obtain new first and second normalized hidden vectors, which are input into the convolutional neural network again for feature extraction to obtain new first and second feature hidden vectors. This step is repeated N times, where the number of repetitions N can be preset; for example, N = 6 gives good results. When the step has been completed N times, the first feature vector and the second feature vector are obtained.
Specifically, after the first feature vector and the second feature vector are obtained, they can be input into a neural network model again for normalization to obtain a first normalized vector and a second normalized vector. The first normalized vector and the second normalized vector are then input into a self-attention neural network model for calculation, and the calculated vectors are added to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector; obtaining the coding block vectors means that one embedding operation is complete.
Specifically, after the first coding block vector and the second coding block vector are obtained, they are added to their corresponding position information to obtain new first and second position vectors. These are input into a neural network model for normalization to obtain new first and second normalized hidden vectors, which are input into the convolutional neural network model again for feature extraction to obtain new first and second feature vectors. The new feature vectors are input into the neural network model for normalization to obtain new first and second normalized vectors, which are input into the self-attention neural network model for calculation; the calculated results are added to the new first and second feature vectors respectively to obtain a new first coding block vector and a new second coding block vector. These steps are repeated N times, where N can be preset; for example, N = 6 gives good results. When the steps have been completed N times, the first cyclic vector and the second cyclic vector are obtained; obtaining the cyclic vectors means that the stack embedding operation is complete.
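Gathering the pieces of this embedding cycle, the sketch below models one coding block in PyTorch under several assumptions the patent leaves open: a convolution kernel of size 3 as the feature extractor, 8 attention heads, weights shared across repetitions, and N = 6 for both the inner feature cycle and the outer block cycle:

```python
import torch
import torch.nn as nn

class CodingBlock(nn.Module):
    """One embedding operation: normalize -> convolutional feature extraction
    (residual add of the position vector) -> normalize -> self-attention
    (residual add of the feature vector)."""

    def __init__(self, d: int = 128, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d)
        self.conv = nn.Conv1d(d, d, kernel_size=3, padding=1)   # feature extraction
        self.norm2 = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def forward(self, pos_vec: torch.Tensor, n_inner: int = 6) -> torch.Tensor:
        x = pos_vec
        for _ in range(n_inner):                      # first cycle: repeated feature extraction
            h = self.norm1(x)                         # normalized hidden vector
            f = self.conv(h.transpose(1, 2)).transpose(1, 2)
            x = f + pos_vec                           # feature hidden vector
        normed = self.norm2(x)                        # normalized vector
        out, _ = self.attn(normed, normed, normed)    # self-attention processing
        return out + x                                # coding block vector

block = CodingBlock()
vec = torch.randn(1, 20, 128)                         # e.g. 20 words, 128-dim vectors
pos_info = torch.randn(1, 20, 128)                    # position information (sinusoidal in practice)
for _ in range(6):                                    # second cycle: N = 6 coding blocks
    vec = block(vec + pos_info)
cyclic_vector = vec                                   # first/second cyclic vector
```

Whether the patent re-uses one set of weights across the N repetitions or instantiates a fresh block each time is not specified; the sketch shares them for brevity.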
Step S103, splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector;
Specifically, after the first cyclic vector and the second cyclic vector are obtained, they can be spliced to obtain a mixed vector. The splicing is a concatenation of the vectors: for example, if the first cyclic vector is a 20 x 128-dimensional vector and the second cyclic vector is a 30 x 128-dimensional vector, the spliced vector is a 50 x 128-dimensional vector.
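A one-step sketch of the splicing, assuming (as the 20 + 30 = 50 example implies) concatenation along the word axis:

```python
import numpy as np

first_cyclic = np.random.randn(20, 128)    # 20 x 128-dimensional cyclic vector
second_cyclic = np.random.randn(30, 128)   # 30 x 128-dimensional cyclic vector
mixed_vector = np.concatenate([first_cyclic, second_cyclic], axis=0)
assert mixed_vector.shape == (50, 128)     # spliced 50 x 128 mixed vector
```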
Specifically, after the mixed vector is obtained, stack embedding can be performed on it following step S102: the mixed vector is added to its corresponding position information to obtain a new position vector; the new position vector is normalized to obtain a new normalized hidden vector; feature extraction is performed on the new normalized hidden vector to obtain a new feature vector; the new feature vector is normalized again to obtain a new normalized vector; finally, the new normalized vector is input into the self-attention neural network model for calculation, and the calculation result is added to the new feature vector to obtain a new coding block vector. This is processed cyclically to obtain a cyclic vector, and the final cyclic vector is the mixed stack vector.
Step S104, normalizing the mixed stack vector to obtain a text retrieval result.
Specifically, after the mixed stack vector is obtained, it can be processed linearly: the mixed stack vector is input into the fully connected layer of a convolutional neural network to obtain a vector to be classified. The fully connected layer can be regarded as a matrix multiplication. For example, if the input is a 128 x 128 vector flattened to length 128 x 128 and the matrix of the fully connected layer is (128 x 128) x 4, the result is a vector of shape (1, 4). The purpose of the linear processing is dimensionality reduction: in the example, the vector is reduced to 4 dimensions, and the reduced vector is the vector to be classified. The output dimension of the fully connected layer equals the total number of retrievable legal provisions; for example, with 2000 legal provisions in total, the output is a (1, 2000) vector. The construction of the fully connected layer is therefore preset according to the number of legal provisions.
Specifically, after the vector to be classified is obtained, it can be normalized, for example by a softmax function; after normalization, a probability is output for each dimension of the vector to be classified, where each dimension corresponds to one legal provision.
Specifically, a legal provision probability threshold can be preset. After the probability of each legal provision is obtained, each probability is compared with the preset threshold; if the probability is greater than the threshold, the corresponding legal provision is output, otherwise it is not.
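The linear processing, normalization and thresholding of step S104 can be sketched as follows; the weight matrix is a scaled random stand-in and the 0.001 threshold is an arbitrary placeholder, since the patent presets both but fixes neither:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

n_provisions = 2000                          # total retrievable provisions, as in the example
rng = np.random.default_rng(0)
mixed_stack = rng.standard_normal(128 * 128)                 # flattened mixed stack vector
fc_weights = rng.standard_normal((128 * 128, n_provisions)) / np.sqrt(128 * 128)

to_classify = mixed_stack @ fc_weights       # linear processing -> (2000,) vector to classify
probabilities = softmax(to_classify)         # one probability per legal provision

threshold = 0.001                            # assumed legal provision probability threshold
hits = np.nonzero(probabilities > threshold)[0]  # indices of the provisions to output
```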
In this embodiment, by feeding the input text into multiple paths, vectorizing each path, performing the cyclic stack-embedding operation, splicing the results, and performing the cyclic stack-embedding operation again, the legal provision retrieval result is obtained, and the accuracy of legal provision retrieval can be effectively improved.
Fig. 2 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a second embodiment of the present application, as shown in the drawing, in step s101, an input text is obtained, vectorization is performed on the input text, and a first text vector and a second text vector are obtained, including:
Step s201, acquiring an input text, and setting the input text as a first text;
Specifically, after the input text is acquired, two copies of it may be made, one of which is set as the first text.
Step s202, performing entity linking on the first text to obtain elements in the first text, splicing the elements into a context, and setting the context as a second text;
Specifically, element information in the input text can be extracted by entity linking. The element information comprises the dispute focus, fact elements, and evidence. For example, in the text "What legal provisions need to be referred to in a lending relationship?", the dispute focus is whether the lending relationship is established, the fact element is whether a loan note/arrears note/receipt/loan contract was signed, and the evidence is the loan contract. All the element information is then spliced into a context, and the context can be set as the second text.
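A toy sketch of how the extracted element information might be spliced into the second text; the element strings and the plain space-joined format are assumptions of this sketch, as the patent does not fix a splicing format:

```python
# Hypothetical elements produced by entity linking on the first text.
elements = {
    "dispute focus": "whether the lending relationship is established",
    "fact element": "whether a loan note/arrears note/receipt/loan contract was signed",
    "evidence": "loan contract",
}

# Splice all element information into one context and use it as the second text.
second_text = " ".join(elements.values())
print(second_text)
```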
Step s203, vectorizing the first text and the second text respectively to obtain a first text vector and a second text vector.
Specifically, after the first text and the second text are obtained, vectorization can be performed on the first text and the second text respectively, so as to obtain a first text vector and a second text vector.
In this embodiment, by dividing the input text into two paths and processing and vectorizing the two paths differently, more contextual information can be obtained from the text, which improves retrieval performance.
Fig. 3 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a third embodiment of the present application. As shown in the figure, step s203 of vectorizing the first text and the second text respectively to obtain a first text vector and a second text vector includes:
step s301, performing word segmentation on the first text and the second text to obtain each word in the first text and the second text;
specifically, the word segmentation may be performed by a word segmentation tool, and each word in the first text and the second text may be obtained after the word segmentation is performed on the first text and the second text.
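As one possible word-segmentation tool (the patent names none), the jieba library can segment Chinese text; the sample sentence below is an illustrative Chinese rendering of the running example, not a quotation from the original:

```python
import jieba  # a commonly used Chinese word-segmentation tool; one possible choice

words = list(jieba.cut("请问借贷关系中需要参考什么法律条文?"))
print(words)  # each word/token of the text
```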
Step s302, presetting the dimension of a vector, and respectively vectorizing each word of the first text and each word of the second text according to the dimension of the vector to obtain a first text vector and a second text vector.
Specifically, the vectorization may be performed by the word2Vec method, and the vectors may be set to 128 dimensions: with a vectorization function x = V(char), where char denotes a single word, V(char) is a 128-dimensional vector [v1, v2, ..., v128]. The preset dimension of the text vector also determines the number of words taken from the input text; for example, if it is set to 128, the input text is 128 words long: words beyond the 128th are deleted directly, and if the input text is shorter than 128 words, the missing positions are padded with 0. For example, when the user enters the text "What legal provisions need to be referred to in a lending relationship?", each word in the text is vectorized in turn: x1 = V(first word), x2 = V(second word), and so on. After each word in the first text and the second text is vectorized, the first text vector and the second text vector are obtained.
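A sketch of the vectorization just described, assuming a pre-trained word2Vec lookup (replaced here by a random stand-in) with the preset 128-dimensional vectors and 128-word text length:

```python
import numpy as np

DIM, MAX_LEN = 128, 128                     # preset vector dimension and text length
_table: dict[str, np.ndarray] = {}

def V(word: str) -> np.ndarray:
    """Stand-in for a trained word2Vec lookup: one 128-dim vector per word."""
    if word not in _table:
        _table[word] = np.random.randn(DIM)
    return _table[word]

def vectorize(words: list[str]) -> np.ndarray:
    words = words[:MAX_LEN]                              # words beyond 128 are dropped
    vecs = [V(w) for w in words]
    vecs += [np.zeros(DIM)] * (MAX_LEN - len(vecs))      # pad short texts with 0
    return np.stack(vecs)                                # 128 x 128 text vector

text_vector = vectorize(["what", "legal", "provisions", "apply", "?"])
assert text_vector.shape == (128, 128)
```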
In this embodiment, by vectorizing the text, the context information in the text can be better obtained, and text retrieval can be more accurately implemented.
Fig. 4 is a flowchart of a legal provision retrieval method based on a neural network hybrid model according to a fourth embodiment of the present application. As shown in the figure, step s102 of performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector includes:
step s401, adding the first text vector and the position information of the first text vector to obtain a first position vector, and adding the second text vector and the position information of the second text vector to obtain a second position vector;
Specifically, the position information in the first text vector and the second text vector is first acquired according to the calculation formulas $PE_{(p,2i)} = \sin\left(p/10000^{2i/d}\right)$ and $PE_{(p,2i+1)} = \cos\left(p/10000^{2i/d}\right)$, wherein p represents the position of a word in the word vector, i indexes the elements of the vector corresponding to each word, and d represents the vector dimension; the position information is then added to the first text vector and the second text vector respectively to obtain a first position vector and a second position vector.
Step s402, respectively inputting the first position vector and the second position vector into a neural network model for normalization processing to obtain a first normalized hidden vector and a second normalized hidden vector;
Specifically, after the first position vector and the second position vector are obtained, they are input into a neural network model for normalization processing. The normalization can be performed according to the formula $\bar{a} = \frac{a - \mu}{\sigma}$, with $\mu = \frac{1}{H}\sum_{i=1}^{H} a_i$ and $\sigma = \sqrt{\frac{1}{H}\sum_{i=1}^{H}(a_i - \mu)^2}$, wherein μ is the mean, σ is the standard deviation, a is a position vector, and H is the number of neurons in the neural network, thereby obtaining a first normalized hidden vector and a second normalized hidden vector.
Step s403, extracting features of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector;
Specifically, after the first normalized hidden vector and the second normalized hidden vector are obtained, they can be input into a convolutional neural network for feature extraction, which can be performed by the convolution kernels of the network and comprises the extraction of vector features; after the vector features are extracted, the first feature vector and the second feature vector are obtained.
Step s404, inputting the first feature vector and the second feature vector into a neural network model for normalization processing to obtain a first normalized vector and a second normalized vector, inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and performing cyclic processing on the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector.
Specifically, after the first feature vector and the second feature vector are obtained, the first feature vector and the second feature vector may be input into a neural network model to perform normalization processing to obtain a first normalized vector and a second normalized vector, the first normalized vector and the second normalized vector are input into a self-attention neural network model to perform processing to obtain a first coding block vector and a second coding block vector, and then the first coding block vector and the second coding block vector are subjected to cyclic processing to obtain a first cyclic vector and a second cyclic vector.
In this embodiment, by performing stack embedding operation on the text vector, collection and recognition of text information can be improved, and accuracy of text retrieval can be improved.
Fig. 5 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a fifth embodiment of the present application. As shown in the figure, step s403 of performing feature extraction on the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector includes:
step s501, inputting the first normalized hidden vector and the second normalized hidden vector into a neural network model for feature extraction, and adding the vectors after feature extraction with the first position vector and the second position vector respectively to obtain a first feature hidden vector and a second feature hidden vector;
Specifically, after the first normalized hidden vector and the second normalized hidden vector are obtained, they can be input into a convolutional neural network for feature extraction, which can be performed by the convolution kernels of the network and comprises the extraction of vector features; after the vector features are extracted, the feature-extracted vectors are added to the first position vector and the second position vector respectively to obtain the first feature hidden vector and the second feature hidden vector.
Step s502, presetting a first cycle number; inputting the first feature hidden vector and the second feature hidden vector into a neural network model for normalization; inputting the normalized vectors into the neural network model for feature extraction; adding the feature-extracted vectors to the first position vector and the second position vector respectively; and repeating this step the preset first cycle number of times to obtain the first feature vector and the second feature vector.
Specifically, after the first feature hidden vector and the second feature hidden vector are obtained, position information can be acquired from them and added to the first feature hidden vector and the second feature hidden vector respectively to obtain a new first position vector and a new second position vector. The new position vectors are then input into the neural network for normalization to obtain new first and second normalized hidden vectors, and finally the new normalized hidden vectors are input into the convolutional neural network again for feature extraction to obtain new first and second feature hidden vectors. This step is repeated N times, where N can be preset; for example, N = 6 gives good results, and at each repetition the current output is used as the next input. After this step has been completed N times, the first feature vector and the second feature vector are obtained.
In this embodiment, feature information in the text can be extracted more accurately by extracting features from the text vector, so that the accuracy of text retrieval is improved.
Fig. 6 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a sixth embodiment of the present application. As shown in the figure, the step of inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and cyclically processing the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector, includes:
step s601, inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing, and adding the vectors obtained after the model processing with the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector;
Specifically, after the first normalized vector and the second normalized vector are obtained, they may be input into a self-attention neural network model for calculation, and the calculated vectors may be added to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector.
Step s602, presetting a second cycle number; adding the first coding block vector and the second coding block vector to position information to obtain position vectors; inputting the position vectors into a neural network model for normalization to obtain normalized hidden vectors; performing feature extraction on the normalized hidden vectors to obtain feature vectors; normalizing the feature vectors to obtain normalized vectors; inputting the normalized vectors into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector; and repeating these steps the preset second cycle number of times to obtain the first cyclic vector and the second cyclic vector.
Specifically, after the first coding block vector and the second coding block vector are obtained, they are added to their corresponding position information to obtain new first and second position vectors, which are input into a neural network model for normalization to obtain new first and second normalized hidden vectors. The new normalized hidden vectors are input into the convolutional neural network model again for feature extraction to obtain new first and second feature vectors, which are then input into the neural network model for normalization to obtain new first and second normalized vectors. Finally, the new normalized vectors are input into the self-attention neural network model for calculation, and the calculated results are added to the new first and second feature vectors respectively to obtain a new first coding block vector and a new second coding block vector. This step is repeated N times, where N can be preset; for example, N = 6 gives good results, and at each repetition the current output is used as the next input. After this step has been completed N times, the first cyclic vector and the second cyclic vector are obtained.
In this embodiment, the accuracy of text retrieval can be improved by performing stack embedding processing on the text vector.
Fig. 7 is a schematic flow chart of a legal provision retrieval method based on a neural network hybrid model according to a seventh embodiment of the present application, as shown in the drawing, in step s104, normalization processing is performed on the hybrid stack vector to obtain a text retrieval result, which includes:
step s701, presetting a legal provision probability threshold;
Specifically, the probability threshold is used to remove legal provisions with low probability, and it can be preset in the system.
Step s702, inputting the mixed stack vector into a full connection layer of a convolutional neural network for linear processing to obtain a vector to be classified, and normalizing the vector to be classified to obtain a probability corresponding to each legal provision;
Specifically, after the mixed stack vector is obtained, it can be processed linearly: the mixed stack vector is input into the fully connected layer of a convolutional neural network to obtain a vector to be classified. The fully connected layer can be regarded as a matrix multiplication. For example, if the input is a 128 x 128 vector flattened to length 128 x 128 and the matrix of the fully connected layer is (128 x 128) x 4, the result is a vector of shape (1, 4). The purpose of the linear processing is dimensionality reduction: in the example, the vector is reduced to 4 dimensions, and the reduced vector is the vector to be classified. The output dimension of the fully connected layer equals the total number of retrievable legal provisions; for example, with 2000 legal provisions in total, the output is a (1, 2000) vector. The construction of the fully connected layer is therefore preset according to the number of legal provisions.
Specifically, after the vector to be classified is obtained, it can be normalized, for example by a softmax function; after normalization, a probability is output for each dimension of the vector to be classified, where each dimension corresponds to one legal provision.
Step s703, comparing the probability corresponding to each legal provision with the preset legal provision probability threshold, and outputting all legal provision larger than the legal provision probability threshold.
Specifically, after the probability of each legal provision is obtained, it can be compared with the preset probability threshold; if the probability is greater than the threshold, the corresponding legal provision is output, otherwise it is not.
In this embodiment, a text retrieval result can be obtained quickly by setting a probability threshold and outputting legal provision larger than the probability threshold.
The structure of the legal provision retrieval device based on a neural network hybrid model in the embodiment of the present application is shown in FIG. 8; the device includes:
An acquisition module 801, a first stack module 802, a second stack module 803, and an output module 804; the acquiring module 801 is connected to the first stack module 802, the first stack module 802 is connected to the second stack module 803, and the second stack module 803 is connected to the output module 804; the obtaining module 801 is configured to obtain an input text, and vectorize the input text to obtain a first text vector and a second text vector; the first stack module 802 is configured to perform stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector; the second stack module 803 is configured to splice the first cyclic vector and the second cyclic vector to obtain a hybrid vector, and perform stack embedding on the hybrid vector to obtain a hybrid stack vector; the output module 804 is configured to normalize the mixed stack vector to obtain a text retrieval result.
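Read as code, the four modules chain as in the assumed composition sketch below; the callables are placeholders, not the patent's implementation:

```python
from typing import Callable, List, Tuple

class RetrievalDevice:
    """Chains acquisition -> first stack -> second stack -> output."""

    def __init__(self,
                 acquire: Callable[[str], Tuple[object, object]],
                 first_stack: Callable[[object, object], Tuple[object, object]],
                 second_stack: Callable[[object, object], object],
                 output: Callable[[object], List[str]]):
        self.acquire = acquire            # text -> (first, second) text vectors
        self.first_stack = first_stack    # stack embedding -> cyclic vectors
        self.second_stack = second_stack  # splice + stack embedding -> mixed stack vector
        self.output = output              # normalization -> retrieved provisions

    def retrieve(self, text: str) -> List[str]:
        v1, v2 = self.acquire(text)
        c1, c2 = self.first_stack(v1, v2)
        mixed = self.second_stack(c1, c2)
        return self.output(mixed)
```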
The embodiment of the application also discloses a computer device comprising a memory and a processor, the memory storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the retrieval method in the above embodiments.
The embodiment of the application also discloses a storage medium readable and writable by a processor, the storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the retrieval method described in the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored in a computer-readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (7)

1. The legal provision retrieval method based on the neural network hybrid model is characterized by comprising the following steps of:
acquiring an input text, and vectorizing the input text to obtain a first text vector and a second text vector;
performing stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
splicing the first cyclic vector and the second cyclic vector to obtain a mixed vector, and performing stack embedding on the mixed vector to obtain a mixed stack vector;
and normalizing the mixed stack vector to obtain a text retrieval result;
wherein the acquiring of the input text and the vectorizing of the input text to obtain a first text vector and a second text vector includes:
acquiring an input text, and setting the input text as a first text;
performing entity linking on the first text to obtain elements in the first text, splicing the elements into a context, and setting the context as a second text;
vectorizing the first text and the second text respectively to obtain a first text vector and a second text vector;
wherein the performing of stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector includes:
adding the first text vector and the position information of the first text vector to obtain a first position vector, and adding the second text vector and the position information of the second text vector to obtain a second position vector;
respectively inputting the first position vector and the second position vector into a neural network model for normalization processing to obtain a first normalized hidden vector and a second normalized hidden vector;
extracting features of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector;
inputting the first feature vector and the second feature vector into a neural network model for normalization processing to obtain a first normalized vector and a second normalized vector, inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and cyclically processing the first coding block vector and the second coding block vector to obtain a first cyclic vector and a second cyclic vector;
wherein the normalizing of the mixed stack vector to obtain a text retrieval result includes:
presetting a legal provision probability threshold;
inputting the mixed stack vector into a full-connection layer of a convolutional neural network for linear processing to obtain a vector to be classified, and carrying out normalization processing on the vector to be classified to obtain a probability corresponding to each legal provision;
and comparing the probability corresponding to each legal provision with the preset legal provision probability threshold, and outputting all legal provisions whose probability is greater than the legal provision probability threshold.
2. The legal provision retrieval method based on a neural network hybrid model of claim 1, wherein vectorizing the first text and the second text, respectively, to obtain a first text vector and a second text vector comprises:
performing word segmentation on the first text and the second text to obtain each word in the first text and the second text;
and presetting the dimension of a vector, and respectively vectorizing each word of the first text and each word of the second text according to the dimension of the vector to obtain a first text vector and a second text vector.
3. The legal provision retrieval method based on a neural network hybrid model of claim 1, wherein the feature extraction of the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector comprises:
inputting the first normalized hidden vector and the second normalized hidden vector into a neural network model for feature extraction, and respectively adding the vectors after feature extraction with the first position vector and the second position vector to obtain a first feature hidden vector and a second feature hidden vector;
presetting a first cycle number; inputting the first feature hidden vector and the second feature hidden vector into a neural network model for normalization; inputting the normalized vectors into the neural network model for feature extraction; adding the feature-extracted vectors to the first position vector and the second position vector respectively; and repeating these steps the preset first cycle number of times to obtain the first feature vector and the second feature vector.
4. The legal provision retrieval method based on a neural network hybrid model of claim 3, wherein inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing to obtain a first encoded block vector and a second encoded block vector, and performing cyclic processing on the first encoded block vector and the second encoded block vector to obtain a first cyclic vector and a second cyclic vector, comprises:
inputting the first normalized vector and the second normalized vector into a self-attention neural network model for processing, and adding the vectors obtained after the model processing to the first feature vector and the second feature vector respectively to obtain a first coding block vector and a second coding block vector;
presetting a second circulation number, adding the first coding block vector and the second coding block vector with position information respectively to obtain a position vector, inputting the position vector into a neural network model for normalization processing to obtain a normalized hidden vector, extracting features of the normalized hidden vector to obtain a feature vector, normalizing the feature vector to obtain a normalized vector, inputting the normalized vector into a self-attention neural network model for processing to obtain a first coding block vector and a second coding block vector, and repeatedly executing the steps according to the preset second circulation number to obtain the first circulation vector and the second circulation vector.
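A sketch of this encoding cycle under the same assumptions (PyTorch, illustrative sizes and cycle count); the residual additions below mirror the claim's "adding ... respectively" steps, and the multi-head attention layer is an assumed stand-in for the unspecified self-attention model.

```python
import torch
import torch.nn as nn

dim, num_heads, second_cycle_number = 128, 4, 2  # illustrative sizes and preset cycle number

self_attention = nn.MultiheadAttention(dim, num_heads, batch_first=True)
layer_norm = nn.LayerNorm(dim)
feature_extractor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def encode(feature_vector: torch.Tensor, position_vector: torch.Tensor) -> torch.Tensor:
    """Residual self-attention block, cycled the preset second cycle number of times."""
    normalized = layer_norm(feature_vector)
    attended, _ = self_attention(normalized, normalized, normalized)
    block = attended + feature_vector                     # coding block vector (residual add)
    for _ in range(second_cycle_number):
        hidden = layer_norm(block + position_vector)      # add position info, then normalize
        features = layer_norm(feature_extractor(hidden))  # feature extraction, then normalize
        attended, _ = self_attention(features, features, features)
        block = attended + block                          # next coding block vector
    return block                                          # the first (or second) cyclic vector

x = torch.randn(1, 10, dim)                               # batch of one 10-word text
pos = torch.randn(1, 10, dim)
first_cyclic_vector = encode(x, pos)
```

Keeping the original signal in each residual addition lets information flow through every cycle, which is the standard transformer-style pattern the claim's wording suggests.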
5. A legal provision retrieval device based on a neural network hybrid model, the device comprising:
an acquisition module, configured to acquire an input text and vectorize the input text to obtain a first text vector and a second text vector;
a first stack module, configured to perform stack embedding on the first text vector and the second text vector to obtain a first cyclic vector and a second cyclic vector;
a second stack module, configured to splice the first cyclic vector and the second cyclic vector to obtain a mixed vector, and to perform stack embedding on the mixed vector to obtain a mixed stack vector;
and an output module, configured to normalize the mixed stack vector to obtain a text retrieval result;
wherein the acquisition module is specifically configured to: acquire an input text and set the input text as a first text; perform entity linking on the first text to obtain elements in the first text, splice the elements into a context, and set the context as a second text; and vectorize the first text and the second text, respectively, to obtain the first text vector and the second text vector;
the first stack module is specifically configured to: add the first text vector to the position information of the first text vector to obtain a first position vector, and add the second text vector to the position information of the second text vector to obtain a second position vector; input the first position vector and the second position vector into a neural network model for normalization to obtain a first normalized hidden vector and a second normalized hidden vector; perform feature extraction on the first normalized hidden vector and the second normalized hidden vector to obtain a first feature vector and a second feature vector; input the first feature vector and the second feature vector into a neural network model for normalization to obtain a first normalized vector and a second normalized vector, input the first normalized vector and the second normalized vector into a self-attention neural network model to obtain a first coding block vector and a second coding block vector, and cyclically process the first coding block vector and the second coding block vector to obtain the first cyclic vector and the second cyclic vector;
and the output module is specifically configured to: preset a legal provision probability threshold; input the mixed stack vector into a fully connected layer of a convolutional neural network for linear processing to obtain a vector to be classified, and normalize the vector to be classified to obtain a probability corresponding to each legal provision; and compare the probability corresponding to each legal provision with the preset legal provision probability threshold, and output all legal provisions whose probabilities exceed the threshold.
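Reusing the hypothetical helpers from the earlier sketches (vectorize, encode, retrieve_provisions), the four modules compose roughly as follows. This is illustrative only: the mean pooling, the random position vectors, and folding claim 3's feature-extraction cycle into encode are all added assumptions, since the claims do not specify how the mixed stack vector is reduced for classification.

```python
import torch

def retrieval_device(input_text: str, linked_elements: str) -> torch.Tensor:
    """Composition of the four modules, reusing the sketches above (illustrative only)."""
    # Acquisition module: vectorize the first text (query) and second text (linked elements).
    first = torch.tensor(vectorize(input_text), dtype=torch.float32).unsqueeze(0)
    second = torch.tensor(vectorize(linked_elements), dtype=torch.float32).unsqueeze(0)
    pos1, pos2 = torch.randn_like(first), torch.randn_like(second)  # illustrative positions
    # First stack module: stack-embed each text into a cyclic vector.
    first_cyclic, second_cyclic = encode(first, pos1), encode(second, pos2)
    # Second stack module: splice the two cyclic vectors into a mixed stack vector.
    mixed_stack = torch.cat([first_cyclic.mean(dim=1),
                             second_cyclic.mean(dim=1)], dim=-1).squeeze(0)
    # Output module: normalize and threshold per-provision probabilities.
    return retrieve_provisions(mixed_stack)
```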
6. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the retrieval method of any one of claims 1 to 4.
7. A storage medium readable by a processor, the storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the retrieval method of any one of claims 1 to 4.
CN201910991657.1A 2019-10-18 2019-10-18 Legal provision retrieval method and related equipment based on neural network hybrid model Active CN110928987B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910991657.1A CN110928987B (en) 2019-10-18 2019-10-18 Legal provision retrieval method and related equipment based on neural network hybrid model
PCT/CN2019/119314 WO2021072892A1 (en) 2019-10-18 2019-11-19 Legal provision search method based on neural network hybrid model, and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910991657.1A CN110928987B (en) 2019-10-18 2019-10-18 Legal provision retrieval method and related equipment based on neural network hybrid model

Publications (2)

Publication Number Publication Date
CN110928987A (en) 2020-03-27
CN110928987B (en) 2023-07-25

Family

ID=69849151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910991657.1A Active CN110928987B (en) 2019-10-18 2019-10-18 Legal provision retrieval method and related equipment based on neural network hybrid model

Country Status (2)

Country Link
CN (1) CN110928987B (en)
WO (1) WO2021072892A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113282709B (en) * 2021-06-01 2022-11-04 平安国际智慧城市科技股份有限公司 Text matching method, device and equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018113498A1 (en) * 2016-12-23 2018-06-28 北京国双科技有限公司 Method and apparatus for retrieving legal knowledge
CN110110045A (en) * 2019-04-26 2019-08-09 腾讯科技(深圳)有限公司 A kind of method, apparatus and storage medium for retrieving Similar Text
CN110276068A (en) * 2019-05-08 2019-09-24 清华大学 Law merit analysis method and device
CN110275936A (en) * 2019-05-09 2019-09-24 浙江工业大学 A kind of similar law case retrieving method based on from coding neural network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160232441A1 (en) * 2015-02-05 2016-08-11 International Business Machines Corporation Scoring type coercion for question answering
CN107679224B (en) * 2017-10-20 2020-09-08 竹间智能科技(上海)有限公司 Intelligent question and answer method and system for unstructured text
CN109271524B (en) * 2018-08-02 2021-10-15 中国科学院计算技术研究所 Entity linking method in knowledge base question-answering system
CN109446416B (en) * 2018-09-26 2021-09-28 南京大学 Law recommendation method based on word vector model
CN109829055B (en) * 2019-02-22 2021-03-12 苏州大学 User law prediction method based on filter door mechanism
CN110188192B (en) * 2019-04-16 2023-01-31 西安电子科技大学 Multi-task network construction and multi-scale criminal name law enforcement combined prediction method
CN110334189B (en) * 2019-07-11 2023-04-18 河南大学 Microblog topic label determination method based on long-time and short-time and self-attention neural network

Also Published As

Publication number Publication date
WO2021072892A1 (en) 2021-04-22
CN110928987A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
US11238093B2 (en) Video retrieval based on encoding temporal relationships among video frames
CN110825857B (en) Multi-round question and answer identification method and device, computer equipment and storage medium
CN109344399B (en) Text similarity calculation method based on stacked bidirectional lstm neural network
CN111291177B (en) Information processing method, device and computer storage medium
CN112819023A (en) Sample set acquisition method and device, computer equipment and storage medium
CN106033426A (en) Image retrieval method based on latent semantic minimum hash
CN111985228A (en) Text keyword extraction method and device, computer equipment and storage medium
CN111563161B (en) Statement identification method, statement identification device and intelligent equipment
CN110197213B (en) Image matching method, device and equipment based on neural network
CN115098556A (en) User demand matching method and device, electronic equipment and storage medium
CN117332788A (en) Semantic analysis method based on spoken English text
CN117217277A (en) Pre-training method, device, equipment, storage medium and product of language model
CN110928987B (en) Legal provision retrieval method and related equipment based on neural network hybrid model
CN111611796A (en) Hypernym determination method and device for hyponym, electronic device and storage medium
CN112632287B (en) Electric power knowledge graph construction method and device
CN115392357A (en) Classification model training and labeled data sample spot inspection method, medium and electronic equipment
CN115017267A (en) Unsupervised semantic retrieval method and device and computer readable storage medium
CN112307175B (en) Text processing method, text processing device, server and computer readable storage medium
CN113515620A (en) Method and device for sorting technical standard documents of power equipment, electronic equipment and medium
CN117390454A (en) Data labeling method and system based on multi-domain self-adaptive data closed loop
CN116257601A (en) Illegal word stock construction method and system based on deep learning
CN109446321A (en) Text classification method, text classification device, terminal and computer readable storage medium
CN118627506B (en) Method, device, equipment, medium and product for extracting answering text segment
CN113254587B (en) Search text recognition method and device, computer equipment and storage medium
CN113268566B (en) Question and answer pair quality evaluation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant