
CN112751671A - Novel key exchange method based on tree parity machine - Google Patents

Novel key exchange method based on tree parity machine

Info

Publication number
CN112751671A
CN112751671A · Application CN202011620056.9A
Authority
CN
China
Prior art keywords
parties
key exchange
sliding window
tree
tree parity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011620056.9A
Other languages
Chinese (zh)
Other versions
CN112751671B (en)
Inventor
李西明
王璇
郭玉彬
杜治国
陈志浩
温嘉勇
徐康
蔡河鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN202011620056.9A priority Critical patent/CN112751671B/en
Publication of CN112751671A publication Critical patent/CN112751671A/en
Application granted granted Critical
Publication of CN112751671B publication Critical patent/CN112751671B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 - Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0861 - Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L9/0869 - Generation of secret information including derivation or calculation of cryptographic keys or passwords involving random numbers or seeds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/58 - Random or pseudo-random number generators
    • G06F7/582 - Pseudo-random number generators
    • G06F7/584 - Pseudo-random number generators using finite field arithmetic, e.g. using a linear feedback shift register
    • G06F7/586 - Pseudo-random number generators using an integer algorithm, e.g. using linear congruential method
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/04 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0435 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the sending and receiving network entities apply symmetric encryption, i.e. same key used for encryption and decryption

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention discloses a novel key exchange method based on a tree parity machine, comprising the following steps: S1, the two communicating parties each locally generate a tree parity machine network model; S2, both parties generate sliding windows; S3, both parties generate the same random vector x; S4, the random vector x is input into each network model, and the models output τa and τb respectively; S5, it is judged whether τa and τb are equal; S6, the model weights are updated according to the Hebbian update rule, and the result true is saved to the sliding window; S7, steps S3-S6 are repeated until the hash values of the two parties' weight vectors are completely identical, yielding the network weights Ka and Kb of the two parties. The invention exploits the self-synchronization property of neural networks to achieve key exchange. Updating the weights of both parties' models with an update rule that incorporates a dynamic learning rate reduces the number of rounds required for network synchronization and accelerates the synchronization.

Description

Novel key exchange method based on tree parity machine
Technical Field
The invention relates to the technical field of key exchange, in particular to a novel key exchange method based on a tree parity machine.
Background
In order to realize secure communication, information needs to be encrypted. At present there are two main encryption methods: symmetric encryption and asymmetric encryption. Asymmetric encryption is not suitable for encrypting large amounts of data because of its slow encryption speed. Symmetric encryption is widely used because of its high encryption speed and security. However, symmetric encryption requires that the two communicating parties share a common key, which involves a key exchange. Conventional key exchange mainly includes Diffie-Hellman key exchange and tree-parity-machine-based key exchange. As shown in FIG. 1, conventional Diffie-Hellman key exchange can be divided into seven sub-processes, as follows:
The first step is as follows: Alice sends two prime numbers n and g to Bob, where Alice denotes communicating party A and Bob denotes communicating party B. n must be a very large prime number, while g may be a smaller number. n and g do not need to be kept secret and may be known to an eavesdropper.
The second step is that: alice generates a random number x ∈ (1, n-2). The random number x is a secret number known only to Alice, and does not have to be told to Bob, nor to an eavesdropper.
The third step: bob generates a random number y ∈ (1, n-2).
The fourth step: alice sets the power X mod n of g to g, i.e. X ═ gxmod n this number is sent to Bob.
The fifth step: bob sets the power B of g mod n, that is, Y ═ gymod n this number is sent to Alice.
And a sixth step: alice uses the number Y sent by Bob and calculates the number X timesMod n, i.e. k ═ Yxmod n is the shared key ka
The seventh step: bob uses the number X sent by Alice and calculates the y power mod n, i.e. k equals to Xymod n is the shared key kb
The above Diffie-Hellman procedure is a key exchange method based on algebraic theory.
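The seven sub-processes can be traced with a toy computation (the modulus here is far smaller than the thousands-of-bits primes a real exchange requires, which is precisely the cost discussed below; the concrete values of n and g are illustrative):

```python
import random

def dh_demo(n=997, g=5, seed=None):
    # Toy Diffie-Hellman run of the seven steps above.
    rng = random.Random(seed)
    x = rng.randint(2, n - 2)      # second step: Alice's secret x
    y = rng.randint(2, n - 2)      # third step: Bob's secret y
    X = pow(g, x, n)               # fourth step: X = g^x mod n, sent to Bob
    Y = pow(g, y, n)               # fifth step:  Y = g^y mod n, sent to Alice
    ka = pow(Y, x, n)              # sixth step:   k_a = Y^x mod n
    kb = pow(X, y, n)              # seventh step: k_b = X^y mod n
    assert ka == kb                # both sides now hold the same shared key
    return ka
```

The equality k_a = k_b holds because (g^y)^x = (g^x)^y mod n; the security rests on the hardness of recovering x or y from X and Y.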
The tree parity machine in the tree-parity-machine-based key exchange model is a three-layer feedforward network. The input layer receives a multi-dimensional input vector, where each component x_ij is a random value taken from {-1, 1}. Each hidden-layer node collects all values from its inputs and passes them through its sgn activation function; the hidden nodes' outputs are then multiplied together to obtain the final output τ of the output layer.
FIG. 2 illustrates a TPM model comprising 3 hidden nodes and a 12-dimensional input vector, where each hidden node has 4 inputs x_ij ∈ {1, -1}. The connection between each input component and a hidden node has its own weight w_ij, where i denotes the index of the hidden node and j the index within the input vector. The input h_i of a hidden node is calculated as follows:

h_i = Σ_{j=1}^{N/K} w_ij · x_ij
Description of the symbols:

    Symbol    Meaning
    Ka, Kb    Keys obtained by the exchange
    W_ij      Weights of the network
    X_ij      Input vector
    σ_i       Output value of the hidden layer
    τ         Final output of the network
    L         Value range of the network weights
    N         Dimension of the input vector
    K         Number of hidden-layer nodes
    h_i       Input of a hidden-layer node
After computing the input value h_i of the hidden layer, the final output of the hidden layer is obtained by applying the activation function, whose expression is:

σ_i = sgn(h_i)

It is specified that when the current value h_i is zero, the output σ_i of the activation function is 1. After the output results of all hidden nodes are calculated, the output result of the whole network is obtained from them:

τ = ∏_{i=1}^{K} σ_i
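The forward computation just described (local fields h_i, activations σ_i with sgn(0) = 1, and the product output τ) can be sketched as:

```python
import numpy as np

def tpm_forward(w, x):
    """Forward pass of a tree parity machine.

    w, x: integer arrays of shape (K, N//K), i.e. per-hidden-node weights
    and inputs.  Implements h_i = sum_j w_ij * x_ij, sigma_i = sgn(h_i)
    with sgn(0) taken as 1 (as the text specifies), and tau = prod_i sigma_i.
    """
    h = (w * x).sum(axis=1)            # local field of each hidden node
    sigma = np.where(h >= 0, 1, -1)    # sgn with the sgn(0) = 1 convention
    tau = int(sigma.prod())            # final output of the network
    return tau, sigma, h

# the FIG. 2 shape: K = 3 hidden nodes with 4 inputs each (N = 12);
# the weight values below are illustrative
w = np.array([[1, -2, 0, 3], [2, 2, -1, 1], [-3, 1, 1, 0]])
x = np.ones((3, 4), dtype=int)
tau, sigma, h = tpm_forward(w, x)
```

Because τ is a product of ±1 values, it reveals only the parity of the hidden activations, not the activations themselves; this is what makes the output safe to exchange publicly.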
Since both sides hold a TPM network model with the same structure, the rule of synchronous training is to update the weights once whenever the final outputs τa and τb of the two sides are the same. The update rule is as follows:

w_Ai(n+1) = w_Ai(n) + xτ
w_Bi(n+1) = w_Bi(n) + xτ

where the A in w_Ai(n+1) denotes communicating party A and i denotes the i-th hidden node, so w_Ai(n+1) represents the updated weight vector of A's i-th hidden node; n+1 indexes the weight after the update and n the weight before it. x denotes the input vector and τ denotes the output value.
The above update rule is the Hebbian rule; there is also the Anti-Hebbian rule:

w_Ai(n+1) = w_Ai(n) - xτ
w_Bi(n+1) = w_Bi(n) - xτ
If τa and τb differ, that round of learning is simply skipped. A new random vector x is then generated and the above steps are repeated until the two parties' networks are synchronized.
Through these weight-update steps, the weights of the two networks become completely consistent; the weight values can then be used as the exchanged key.
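Under the rules above, a minimal synchronization loop might look like the following sketch. Two details are assumptions drawn from the standard TPM literature rather than from the simplified rule stated above: weights are clipped to the range [-L, L], and within a round only the hidden nodes with σ_i = τ are updated.

```python
import numpy as np

def synchronize(K=3, n_per=4, L=3, seed=7, max_rounds=100000):
    """Mutual Hebbian learning of two tree parity machines until the
    weight matrices coincide.  Returns (rounds, wa, wb)."""
    rng = np.random.default_rng(seed)
    wa = rng.integers(-L, L + 1, (K, n_per))   # party A's initial weights
    wb = rng.integers(-L, L + 1, (K, n_per))   # party B's initial weights

    def forward(w, x):
        sigma = np.where((w * x).sum(axis=1) >= 0, 1, -1)  # sgn(0) = 1
        return int(sigma.prod()), sigma

    for rounds in range(1, max_rounds + 1):
        x = rng.choice([-1, 1], size=(K, n_per))   # shared random input
        ta, sa = forward(wa, x)
        tb, sb = forward(wb, x)
        if ta == tb:                               # learn only on agreement
            for w, s in ((wa, sa), (wb, sb)):
                for i in range(K):
                    if s[i] == ta:                 # gate on sigma_i == tau
                        w[i] = np.clip(w[i] + x[i] * ta, -L, L)
        if np.array_equal(wa, wb):                 # identical weights => key
            return rounds, wa, wb
    return max_rounds, wa, wb
```

With these tiny parameters the two networks typically coincide after a few hundred agreement rounds; the number of rounds grows with K, L and N, which is the inefficiency the invention targets.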
However, key exchange models based on algebraic theory cannot generate large prime numbers quickly on current hardware. They involve a large number of mod operations and multiplications on large integers. Such operations demand very high CPU performance and occupy a large amount of memory. A Diffie-Hellman-style key exchange algorithm must compute very large primes (values thousands of bits long); it not only depends on a special large-integer library and places demands on the device's memory, but also involves many complex operations such as multiplication, placing demands on the device's computing power. As a result the key exchange process is slow and difficult to realize on small embedded devices. All key exchange methods using similar public-key techniques share these drawbacks. As for the tree-parity-machine-based key exchange model, the amount of information the model learns in each update is low, so the efficiency of the plain rule is not high enough; when the parameters K, L and N of the tree parity machine are large, the number of rounds the networks need to achieve synchronization grows greatly, and synchronization may even fail to complete.
In summary, reducing the number of synchronization rounds is a key issue. In addition, transmitting the input vector during synchronization also affects synchronization efficiency to a certain extent. There is therefore a need in the industry for a new key exchange method that improves the synchronization efficiency of key exchange and reduces the protocol's requirements on hardware devices.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a novel key exchange method based on a tree parity machine, which improves the synchronization efficiency of key exchange.
The purpose of the invention is realized by the following technical scheme:
a novel key exchange method based on a tree parity machine comprises the following steps:
s1, both communication parties adopt the same network structure to locally generate a tree parity machine network model;
s2, both communication parties generate sliding windows;
s3, the two communication parties generate the same random vector x;
S4, inputting the random vector x into the tree parity machine network models; the model of communicating party A outputs τa and the model of communicating party B outputs τb;
S5, judging whether τa and τb are equal; if yes, executing step S6;
S6, updating the model weights according to the Hebbian update rule, and saving the result true to the sliding window;
S7, repeating steps S3-S6 until the hash values of the two parties' weight vectors are completely identical, obtaining the network weights Ka and Kb of the two parties, with Ka = Kb;
S8, Base64-encoding the weights Ka and Kb and using them as the key of the symmetric encryption algorithm AES, realizing the key exchange between the two communicating parties.
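Steps S7 and S8 can be illustrated as follows. The SHA-256 hash and the truncation of the Base64 string to an AES key length are assumptions, since the text specifies only "Hash" and "Base64":

```python
import base64
import hashlib
import numpy as np

def weight_digest(w):
    # step S7: each party publishes a hash of its weight vector and compares
    # it with the other party's; SHA-256 is an assumed choice of hash
    return hashlib.sha256(w.astype(np.int8).tobytes()).hexdigest()

def derive_aes_key(w):
    # step S8: Base64-encode the synchronized weights; a real AES key must be
    # 16/24/32 bytes, so here we keep at most the first 32 encoded bytes
    return base64.b64encode(w.astype(np.int8).tobytes())[:32]

ka = np.array([[1, -2, 3, 0], [2, 2, -1, 1], [-3, 1, 1, 0]])   # A's weights
kb = ka.copy()                                                  # B's weights
assert weight_digest(ka) == weight_digest(kb)   # synchronization detected
key = derive_aes_key(ka)
```

Comparing hashes rather than the weights themselves matters: sending the weights in plaintext would hand the final key to an eavesdropper, while the digest only confirms equality. With a real-sized weight matrix the Base64 string comfortably covers an AES-256 key; this toy 12-byte array yields only 16 encoded bytes.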
Preferably, step S3 includes: generating the same random vector x by using a local random number generator; wherein the correspondent a and the correspondent B include the same random number generator.
Preferably, the probability that the tree parity machine network model learns erroneously is ε_i(ρ_i, h_i):

ε_i(ρ_i, h_i) = (1/2)·[1 - erf( ρ_i·h_i / √(2(1 - ρ_i²)) )]

where ρ_i ∈ (0,1) represents the product of the two parties' weight vectors, ε_i(ρ_i, h_i) is strictly monotonically decreasing in h_i, and h_i is the input of a hidden-layer node. When h_i takes a suitable value (given in the original as an equation image), the two parties' tree parity machine network models synchronize at the fastest speed.
Preferably, step S3 includes: generating a random vector x by a depth-first search traversal algorithm and sending it to the other party, ensuring that the two communicating parties' random vectors x are the same; the random vector x makes h_i take the value at which the two parties' models synchronize fastest.
Preferably, step S6 includes: putting the result true into the sliding window and deleting the first element of the sliding window, keeping the number of elements in the window unchanged; re-determining the learning rate according to the number of true entries in the window; and finally updating the weights of both parties.
Preferably, the learning rate is re-determined according to the number of true in the sliding window, and finally updating the weight values of both parties includes: recalculating the ratio of true of the current sliding window, dynamically adjusting the learning rate according to the ratio of true, and updating the weight values of the tree parity machine network models of the two parties by an updating rule after the dynamic learning rate is added; the update rule after adding the dynamic learning rate is as follows:
w_Ai(n+1) = w_Ai(n) + λxτ
w_Bi(n+1) = w_Bi(n) + λxτ

where λ is the dynamic learning rate and λ ∈ {1, 2, 3, ..., L/2}.
Preferably, in step S5, if τa and τb are not equal, the result false is saved into the sliding window and the first element of the sliding window is deleted, keeping the number of elements in the window unchanged.
Preferably, the random number generator is an LFSR or an LCG.
Preferably, step S1 includes both parties generating the tree parity machine network model locally according to pre-negotiated parameters K, L and N.
Compared with the prior art, the invention has the following advantages:
1. The tree parity machine network model of the invention reduces the amount of computation in key exchange, so the key exchange is faster; the key length can vary with the input vector dimension N.
2. The two communicating parties adopt the same network structure to generate the tree parity machine network model locally, exploiting the self-synchronization property of neural networks to achieve key exchange. When updating the model weights according to the Hebbian update rule, the update step is multiplied by the dynamic learning rate; updating the weights of both parties' tree parity machine network models with this rule reduces the number of rounds required for network synchronization and accelerates the synchronization.
3. Generating the same random vector x with a local random number generator reduces the amount of information the two communicating parties must exchange, further improving key exchange efficiency and reducing the requirements of the key exchange protocol on hardware devices.
4. A highly efficient random vector x is generated with a depth-first search traversal algorithm; this vector makes h_i take the value at which the two parties' tree parity machine network models synchronize fastest, and the efficient input vector increases the amount of information the models learn per synchronization step, thereby further reducing the number of interactions.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a diagram of a conventional Diffie-Hellman key exchange process.
FIG. 2 is a diagram of a tree parity model.
Fig. 3 is a schematic flow chart of the novel key exchange method based on the tree parity machine of the invention.
Fig. 4 is a schematic block diagram of an LFSR pseudo-random sequence generator model.
Fig. 5 is a schematic block diagram of the four-bit LFSR model.
Fig. 6 is a graph of interaction times as a function of N, K and L.
Fig. 7 is a schematic view of a sliding window of the present invention.
FIG. 8 is a graph of the relationship between the learning rate and the sliding window accuracy of the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
Referring to fig. 3, a novel key exchange method based on a tree parity machine includes:
S1, the two communicating parties adopt the same network structure to locally generate a tree parity machine network model; the tree parity machine network model is introduced in the Background section. Specifically, in step S1 the two parties locally generate the model according to the pre-negotiated parameters K, L and N. These parameters uniquely determine the network structure and its complexity; negotiating K, L and N therefore means the two parties adopt the same network structure.
S2, both communication parties generate sliding windows; the sliding window is used for storing the condition that the outputs of the two parties are equal in the following iteration process, and the learning rate of the network is determined according to the equal number in the window. Step S1 and step S2 are preparation phases for synchronization. The following steps are iterative processes.
S3, the two communication parties generate the same random vector x; in this embodiment, since the input vectors of both parties are the same vector and do not need to be secured (accessible to an eavesdropper), the LFSR or LCG can be used to generate a local pseudo-random sequence. Specifically, step S3 includes: generating the same random vector x by using a local random number generator; wherein the correspondent a and the correspondent B include the same random number generator. The random number generator is LFSR or LCG. The rules for these two random number generators are: as long as the initial state is the same, the next generated sequences will be identical. Both parties therefore need to have the same initial state in order to be able to obtain the same input vector locally.
The LFSR can be used to generate a pseudo-random sequence; it consists of n stages of flip-flops and a number of XOR gates, as shown in FIG. 4. The circuit model can also be implemented directly in software. G_n is a feedback coefficient that can only take the value 0 or 1: when it is 0, the corresponding feedback path is absent, and when it is 1, it is present. The feedback coefficients determine the particular random-number-generation algorithm.
The initial value of the LFSR is called the seed of the pseudo-random sequence. For the LFSR model shown in FIG. 5, the corresponding feedback function (given in the original as an equation image) combines taps with the XOR operation ⊕, where D1 ⊕ D2 means (D1 + D2) mod 2.

Assuming the initial value of (D1, D2, D3, D4) is 1000, the following highly random sequence results:

100011001110111101111011010110101101011000111001010000100001...
An LCG (linear congruential generator) is an algorithm for generating random numbers by the recurrence X_{n+1} = (a·X_n + c) mod m, where a is the multiplier, c is the increment and mod is the modulo operation. When the LCG algorithm is used to generate the pseudorandom sequence, the required sequence can be obtained by setting m to 2.
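Both generators are easy to realize in software. The sketch below uses illustrative tap positions and LCG constants; the patent fixes the taps via the coefficients Gn and does not specify a, c or m, so these are assumptions:

```python
def lfsr_bits(seed_bits, taps=(0, 1), count=20):
    """Fibonacci LFSR in software.  `taps` picks which stages feed the XOR;
    (0, 1) mirrors the (D1 + D2) mod 2 feedback mentioned above."""
    state = list(seed_bits)
    out = []
    for _ in range(count):
        out.append(state[-1])                       # output the last stage
        fb = (state[taps[0]] + state[taps[1]]) % 2  # XOR feedback bit
        state = [fb] + state[:-1]                   # shift the register
    return out

def lcg(seed, count=8, a=1103515245, c=12345, m=2**31):
    """LCG: X_{n+1} = (a*X_n + c) mod m; the constants are classic
    illustrative values, not taken from the patent."""
    xs, x = [], seed
    for _ in range(count):
        x = (a * x + c) % m
        xs.append(x)
    return xs

# same seed on both sides => identical local sequences, nothing to transmit
assert lfsr_bits([1, 0, 0, 0]) == lfsr_bits([1, 0, 0, 0])
assert lcg(42) == lcg(42)
```

The two asserts demonstrate the property the protocol relies on: with identical initial states the parties regenerate the same input vectors locally.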
Once the tree parity machine networks use a local random number generator, the common random vector of the two networks can be generated directly and locally without network transmission. This reduces the amount of communication data required for the synchronization process.
By using the perceptron prediction-error formula, the probability that the tree parity machine network model learns erroneously is known to be ε_i(ρ_i, h_i):

ε_i(ρ_i, h_i) = (1/2)·[1 - erf( ρ_i·h_i / √(2(1 - ρ_i²)) )]

where ρ_i ∈ (0,1) represents the product of the two parties' weight vectors and h_i is the input of a hidden-layer node. Since ε_i(ρ_i, h_i) is strictly monotonically decreasing in h_i, generating a high-efficiency input vector requires keeping h_i in a suitable value range; this reduces the probability of erroneous learning and accelerates the learning process. When h_i takes a suitable value (given in the original as an equation image), the two parties' tree parity machine network models synchronize at the fastest speed. To make h_i reach that value,
a suitable input vector needs to be searched out. As another possible embodiment, step S3 includes: generating a random vector x by a depth-first search traversal algorithm and sending it to the other party, ensuring that the two communicating parties' random vectors x are the same; random noise is added to the random vector x to ensure the randomness of the input vector, and the random vector x makes h_i take the value at which synchronization is fastest. The logical relationship between the random vector x and h_i is that the input vector x is multiplied element-wise by the weight vector and summed, giving h_i:

h_i = Σ_{j=1}^{N/K} w_ij · x_ij
with the parameters of N300, K6 and L3, document [1] requires about 500 times to be able to achieve synchronization. With the use of an efficient input vector generator, only about 230 times are required to achieve full synchronization, with over a 50% reduction in the number of times required for synchronization. Therefore, the improvement mode provided by the invention can improve the network synchronization performance.
S4, inputting the random vector x into the tree parity machine network models; the model of communicating party A outputs τa and the model of communicating party B outputs τb;
S5, judging whether τa and τb are equal; if yes, executing step S6; if τa and τb are not equal, saving the result false into the sliding window and deleting the first element of the sliding window, keeping the number of elements in the window unchanged.
S6, updating the weight value of the model according to the updating rule of Hebbian, and saving the result true to the sliding window;
the learning rate of documents [2-5] is always equal to 1, i.e. each update can only increase or decrease the weight in the model by 1. This approach must allow the model to eventually synchronize without falling into a state of being out of sync. But this way of updating makes the synchronization speed of the model slow. As shown in fig. 6, as the number of times required for synchronization increases with K, L, N, this will reduce the efficiency of key exchange. The sliding window protocol introduced by the present invention will dynamically change the learning rate to accelerate the synchronization process as shown in fig. 7, and the specific process includes: and (3) putting the result true into a sliding window, deleting the first element of the sliding window, keeping the number of the elements in the sliding window unchanged (fixed as 50 elements), re-determining the learning rate according to the number of true in the sliding window, and finally updating the weights of the two parties. Further, the learning rate is re-determined according to the number of true in the sliding window, and finally updating the weight values of both parties includes: recalculating the ratio of true of the current sliding window, dynamically adjusting the learning rate according to the ratio of true, and updating the weight values of the tree parity machine network models of the two parties by an updating rule after the dynamic learning rate is added;
the update rule after adding the dynamic learning rate is as follows:
w_Ai(n+1) = w_Ai(n) + λxτ
w_Bi(n+1) = w_Bi(n) + λxτ

where λ is the dynamic learning rate and λ ∈ {1, 2, 3, ..., L/2}.
In the present invention, the learning rate λ is associated with the accuracy of the sliding window; for example, when L = 8, the learning rate λ changes dynamically in the manner shown in FIG. 8.
Experimental results show that the number of interactions required for synchronization is greatly reduced once the learning rate λ is used. With the network parameters N = 100, K = 3 and L = 3, the average number of synchronization rounds over 500 trials is 316 [6]; after using the sliding-window-based protocol the number drops to about 210, and the security of the key exchange can still be guaranteed.
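The sliding-window learning-rate adjustment can be sketched as follows. The linear mapping from the window's true-ratio to λ is an assumption, since the patent specifies the L = 8 mapping only via FIG. 8:

```python
from collections import deque

def dynamic_rate(window, L=8):
    """Map the fraction of true (agreement) entries in the sliding window
    to a learning rate lambda in {1, ..., L//2}; the linear mapping here
    is an assumed stand-in for the FIG. 8 mapping."""
    ratio = sum(window) / len(window)
    return max(1, round(ratio * (L // 2)))

window = deque([False] * 50, maxlen=50)   # window fixed at 50 elements
for agreed in [True] * 30:
    window.append(agreed)                 # appending drops the oldest entry
lam = dynamic_rate(window)                # 30/50 agreement -> lambda = 2
```

A `deque` with `maxlen=50` implements the window behaviour described above: each new result pushes out the oldest one, so the ratio always reflects the last 50 rounds, and a higher agreement ratio (the networks are nearly synchronized) yields larger, faster update steps.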
S7, repeating steps S3-S6 until the hash values of the two parties' weight vectors are completely identical, obtaining the network weights Ka and Kb of the two parties, with Ka = Kb;
S8, Base64-encoding the weights Ka and Kb and using them as the key of the symmetric encryption algorithm AES, realizing the key exchange between the two communicating parties. The plaintext of the weights must not be sent directly to the other side: weights sent in plaintext could be stolen by an adversary to obtain the final key.
The key point of the invention is that it exploits the self-synchronization phenomenon of neural networks: key negotiation is carried out through mutual learning, and the key is transmitted securely without resorting to complex algebraic theory as in traditional methods. In the prior art, keys are exchanged securely with public-key encryption systems, which require tools such as large-integer libraries and primality-testing libraries and demand strong computing power and large memory from the hardware. All calculations of the invention are simple operations with no large memory consumption; key exchange is achieved with only a small amount of interaction, and most embedded devices can run it quickly and obtain a result.
Reference to the literature
[1]Wolfgang Kinzel I K.Interacting Neural Networks and Cryptography[J],2002.
[2]Javurek M,Turcanik M.Synchronization Verification Improvement of Two Tree Parity Machines Using Polynomial Function[C].2018 New Trends in Signal Processing(NTSP),2018.
[3]Allam A M,Abbas H M.On the Improvement of Neural Cryptography Using Erroneous Transmitted Information With Error Prediction[J].IEEE Transactions on Neural Networks,2010,21(12):1915-1924.
[4]Tao D,Tingwen H.Neural Cryptography Based on Complex-Valued Neural Network[J].IEEE transactions on neural networks and learning systems,2019.
[5] Liu Zhilu, Liao Xiaofeng, Liang Yifeng. An improved tree-parity-based neural network synchronization scheme [J]. Computer and Modernization, 2014(5): 51-55.
The above-mentioned embodiments are preferred embodiments of the present invention, and the present invention is not limited thereto, and any other modifications or equivalent substitutions that do not depart from the technical spirit of the present invention are included in the scope of the present invention.

Claims (9)

1. A novel key exchange method based on a tree parity machine is characterized by comprising the following steps:
s1, both communication parties adopt the same network structure to locally generate a tree parity machine network model;
s2, both communication parties generate sliding windows;
s3, the two communication parties generate the same random vector x;
s4, inputting the random vector x into the tree parity machine network model; the tree parity machine network model output tau a of the communication party A and the tree parity machine network model output tau B of the communication party B;
s5, judging whether tau a and tau b are equal; if yes, go to step S6;
s6, updating the weight value of the model according to the updating rule of Hebbian, and saving the result true to the sliding window;
s7, repeating steps S3-S6 until the Hash values of the weight vectors of the two parties are completely the same, obtaining network weights Ka and Kb of the two parties, and Ka being Kb;
s8, Base64 coding the weight Ka and Kb as the key of the symmetric encryption algorithm AES, realizing the exchange of the keys of both communication parties.
2. The novel key exchange method based on a tree parity machine according to claim 1, wherein the step S3 includes: generating the same random vector x by using a local random number generator; wherein communication party A and communication party B include the same random number generator.
3. The novel key exchange method based on a tree parity machine according to claim 1, wherein the probability of error learning of the tree parity machine network model is ε_i(ρ_i, h_i),
Figure FDA0002873900660000011
where ρ_i ∈ (0,1) represents the product of the weight vectors of the two parties, ε_i(ρ_i, h_i) strictly monotonically decreases with h_i in the direction of the peak, and h_i is the input of the hidden-layer node; when h_i takes the value
Figure FDA0002873900660000012
the tree parity machine network models of the two parties synchronize at the fastest speed.
4. The novel key exchange method based on a tree parity machine according to claim 3, wherein the step S3 includes:
generating the random vector x by a depth-first search traversal algorithm and sending it to the other party, so that the random vectors x of the two communication parties are the same and the random vector x makes h_i take the value
Figure FDA0002873900660000021
5. The novel key exchange method based on a tree parity machine according to claim 1, wherein the step S6 includes:
putting the result true into the sliding window and deleting the first element of the sliding window, so that the number of elements in the sliding window remains unchanged; re-determining the learning rate according to the number of true results in the sliding window; and finally updating the weights of the two parties.
6. The novel key exchange method based on a tree parity machine according to claim 5, wherein re-determining the learning rate according to the number of true results in the sliding window and finally updating the weights of the two parties comprises:
recalculating the proportion of true results in the current sliding window, dynamically adjusting the learning rate according to this proportion, and updating the weights of the tree parity machine network models of the two parties by the update rule after the dynamic learning rate is added;
the update rule after adding the dynamic learning rate is as follows:
wAi(n+1) = wAi(n) + xλτ
wBi(n+1) = wBi(n) + xλτ
wherein λ is the dynamic learning rate and λ ∈ {1, 2, 3, ..., L/2}.
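One plausible reading of claims 5-7, with the window kept in a fixed-length deque and a simple proportional mapping from the true-ratio to λ ∈ {1, ..., L/2} (the claims do not fix the exact schedule, so this mapping is an assumption):

```python
from collections import deque

def dynamic_lambda(window, L):
    # map the fraction of true results in the window to an integer in {1, ..., L//2}
    ratio = sum(window) / len(window)
    return max(1, min(L // 2, 1 + int(ratio * (L // 2 - 1))))

# fixed-length sliding window: appending drops the oldest element (claims 5 and 7)
window = deque([True, False, True, True], maxlen=4)
window.append(True)          # result of the latest round
lam = dynamic_lambda(window, L=6)
print(list(window), lam)
```

The update of claim 6 then becomes `w[i] = clip(w[i] + x[i] * lam * tau, -L, L)` for each hidden unit that agrees with τ.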
7. The method of claim 1, wherein in step S5, if τA and τB are not equal, the result false is saved to the sliding window and the first element of the sliding window is deleted, keeping the number of elements in the sliding window unchanged.
8. The novel key exchange method based on a tree parity machine according to claim 2, wherein the random number generator is an LFSR or an LCG.
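Claim 8 only requires that both parties run identical generators from the same seed. A textbook LCG (the constants below are the Numerical Recipes values, an illustrative assumption; the claim does not fix them) shows how both sides obtain the same vector x:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # linear congruential generator: identical seeds yield identical streams
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

def random_vector(gen, n):
    # map each draw to the TPM input alphabet {-1, +1}
    return [1 if next(gen) % 2 else -1 for _ in range(n)]

ga, gb = lcg(12345), lcg(12345)          # both parties use the same seed
xa, xb = random_vector(ga, 12), random_vector(gb, 12)
print(xa == xb)  # True: both parties hold the same random vector x
```

An LFSR with a shared initial state would serve the same purpose; only the identity of the two streams matters for step S3.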
9. The novel key exchange method based on a tree parity machine according to claim 1, wherein the step S1 includes: the two communication parties locally generating the tree parity machine network model according to the pre-negotiated parameters K, L and N.
CN202011620056.9A 2020-12-30 2020-12-30 Novel key exchange method based on tree parity machine Active CN112751671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011620056.9A CN112751671B (en) 2020-12-30 2020-12-30 Novel key exchange method based on tree parity machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011620056.9A CN112751671B (en) 2020-12-30 2020-12-30 Novel key exchange method based on tree parity machine

Publications (2)

Publication Number Publication Date
CN112751671A true CN112751671A (en) 2021-05-04
CN112751671B CN112751671B (en) 2022-07-05

Family

ID=75650202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011620056.9A Active CN112751671B (en) 2020-12-30 2020-12-30 Novel key exchange method based on tree parity machine

Country Status (1)

Country Link
CN (1) CN112751671B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101141248A (en) * 2007-09-30 2008-03-12 浙江工业大学 Neural network weight synchronization based lightweight key negotiation method
CN101977112A (en) * 2010-11-04 2011-02-16 厦门大学 Public key cipher encrypting and decrypting method based on neural network chaotic attractor
CN102263636A (en) * 2011-05-24 2011-11-30 浙江工业大学 Stream cipher key control method for fusing neural network with chaotic mappings
EP3073668A1 (en) * 2015-03-25 2016-09-28 Juniper Networks, Inc. Apparatus and method for authenticating network devices
CN109347633A (en) * 2018-10-29 2019-02-15 华南农业大学 Fuzzy keys communication system and confrontation network system based on deep learning
CN110245746A (en) * 2019-06-19 2019-09-17 广州供电局有限公司 A kind of improved method of BP neural network learning rate
CN110381509A (en) * 2019-06-04 2019-10-25 北京邮电大学深圳研究院 A kind of joint qualification method and server suitable for Dynamic link library scene
CN110929840A (en) * 2018-09-20 2020-03-27 维萨国际服务协会 Continuous learning neural network system using rolling window


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
严杜鹃: "《面向传感器网络的轻量级密钥更新机制研究与实现》", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》 *
张力生等: "《一种基于队列机制的神经密码学的新学习规则》", 《计算机应用研究》 *

Also Published As

Publication number Publication date
CN112751671B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN110363030B (en) Method and processing device for performing a trellis-based cryptographic operation
Zhang et al. GELU-Net: A Globally Encrypted, Locally Unencrypted Deep Neural Network for Privacy-Preserved Learning.
Jeong et al. Neural Cryptography Based on Generalized Tree Parity Machine for Real‐Life Systems
US9948460B2 (en) Multivariate cryptography based on clipped hopfield neural network
Sarkar et al. Multilayer neural network synchronized secured session key based encryption in wireless communication
JP4575283B2 (en) ENCRYPTION DEVICE, DECRYPTION DEVICE, PROGRAM, AND METHOD
JP2020508021A (en) Key exchange device and method
CN112769542B (en) Multiplication triple generation method, device, equipment and medium based on elliptic curve
Ding et al. The Nested Subset Differential Attack: A Practical Direct Attack Against LUOV Which Forges a Signature Within 210 Minutes
Hart et al. A Practical Cryptanalysis of WalnutDSA™
Liu et al. Constructing strong S-Box by 2D chaotic map with application to irreversible parallel key expansion
Korayem et al. Color image encryption using a sine variation of the logistic map for s-box and key generation
CN104618098B (en) Cryptography building method and system that a kind of set member's relation judges
CN112751671B (en) Novel key exchange method based on tree parity machine
Gorbenko et al. Methods of building general parameters and keys for NTRU Prime Ukraine of 5th-7th levels of stability. Product form
CN117768180A (en) Privacy set intersection calculating method based on symmetric key pseudo-random function
Ma et al. Fast correlation attacks on K2 stream cipher
CN116170142A (en) Distributed collaborative decryption method, device and storage medium
CN115208548A (en) Apparatus for processing non-polynomial operation on homomorphic encrypted message and method thereof
JP4598269B2 (en) Fast finite field operations on elliptic curves
KR20230003954A (en) Ciphertext processing method for zero-knowledge proof and apparatus thereof
KR100304368B1 (en) Generation method of de bruijn sequences
Alahmadi et al. Enhancing arbiter PUF against modeling attacks using constant weight encoding
Ali Linearisation attacks on fcsr-based stream ciphers
Kumar et al. A secure structure for hiding information in a cryptosystem based on machine-learning techniques and content-based optimization using portfolio selection data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant