CN109936423B - Training method, device and recognition method of fountain code recognition model - Google Patents
Training method, device and recognition method of fountain code recognition model
- Publication number: CN109936423B
- Application number: CN201910183728.5A
- Authority: CN (China)
- Legal status: Active (an assumption, not a legal conclusion)
Abstract
The embodiment of the invention provides a training method, a device and a recognition method for a fountain code recognition model, wherein the training method comprises the following steps: obtaining a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes; inputting the fountain code sample set into a preset first model for training to obtain a first target model; modulating the fountain code sample set to obtain a modulation mode sample set; inputting the modulation mode sample set into a preset second model for training to obtain a second target model; and constructing the first target model and the second target model into a fountain code recognition model. The invention solves the current problem that fountain codes are difficult to identify automatically in non-cooperative transmission.
Description
Technical Field
The invention relates to the technical field of communication, in particular to a training method, a device and an identification method of a fountain code identification model.
Background
At the beginning of the 21st century, Software Defined Radio (SDR) was born. Software radio replaces special-purpose digital circuits with highly programmable DSP (Digital Signal Processing) devices, so that the hardware structure and the functions of the system are relatively independent. Different communication functions can therefore be realized in software on a relatively universal hardware platform, with programmable control over working frequency, system bandwidth, modulation mode, source coding and the like; this greatly enhances system flexibility and answers the urgent need for non-cooperative reception.
However, the machine learning methods currently adopted all assume cooperative reception, and generally follow one of two schemes:
The first: after IQ (In-phase/Quadrature) data is acquired, feature extraction must be performed on the IQ data, mainly yielding time-domain or transform-domain feature parameters. The time-domain features include instantaneous amplitude, instantaneous frequency and instantaneous phase; transform-domain features include power spectra, spectral correlation functions, time-frequency distributions, and other statistical parameters. This prior art has low recognition accuracy for modulation modes, especially for QAM16 and QAM64; extracting the features requires deep professional knowledge of the communication field, and manually extracting features indirectly loses some information in the original data. A sketch of such hand-crafted feature extraction follows.
The second: a convolutional neural network is used to extract the features of the modulation mode automatically; however, when convolutional layers are simply stacked, prediction accuracy degrades as the depth increases, so the network structure is usually limited to fewer than 8 layers.
The recognition models used in the above two kinds of prior art have the following defect: they are based on cooperative reception and identify only the modulation scheme, so it is difficult to identify fountain codes in the case of non-cooperative transmission.
Disclosure of Invention
In view of this, embodiments of the present invention provide a training method, an apparatus, and an identification method for a fountain code identification model, so as to solve the problem that it is difficult to automatically identify a fountain code in non-cooperative transmission at present.
In a first aspect, the present application provides the following technical solutions through an embodiment:
a training method of a fountain code recognition model comprises the following steps:
obtaining a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes;
inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
modulating the fountain code sample set to obtain a modulation mode sample set;
inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
constructing the first target model and the second target model into a fountain code identification model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
Preferably, the obtaining of the fountain code sample set includes:
acquiring a fountain code data set;
marking samples in the fountain code data set that are coded using fountain codes with a first mark;
marking samples in the fountain code data set that are not coded using fountain codes with a second mark;
and using the fountain code data set marked with the first mark and the second mark as a fountain code sample set.
Preferably, the modulating the fountain code sample set to obtain a modulation mode sample set includes:
modulating each data in the fountain code sample set according to a plurality of modulation modes and a plurality of signal-to-noise ratios to obtain a modulation mode data set;
and adding a corresponding modulation mode label to each data in the modulation mode data set to obtain the modulation mode sample set.
Preferably, the first model and the second model are both deep residual network models.
Preferably, the inputting the fountain code sample set into a preset first model for training to obtain a first target model includes:
inputting training samples in the fountain code sample set into the first model for training;
determining whether the accuracy of the trained first model meets a preset value or not according to the test samples in the fountain code sample set;
if not, adjusting the convolution kernel size of the first model and the number of inception layers, and continuing to input training samples in the fountain code sample set into the adjusted first model for training;
if yes, the first trained model is used as the first target model.
Preferably, 50% of the samples in the fountain code sample set are coded using fountain codes, and the other 50% of the samples are not coded using fountain codes.
In a second aspect, based on the same inventive concept, the present application provides the following technical solutions through an embodiment:
a training device of a fountain code recognition model comprises:
the fountain code sample set acquisition module is used for acquiring a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes;
the first training module is used for inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
a modulation mode sample set obtaining module, configured to modulate the fountain code sample set to obtain a modulation mode sample set;
the second training module is used for inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
the identification model construction module is used for constructing the first target model and the second target model into a fountain code identification model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
Preferably, the fountain code sample set obtaining module is further configured to:
acquiring a fountain code data set;
marking samples in the fountain code data set that are coded using fountain codes with a first mark;
marking samples in the fountain code data set that are not coded using fountain codes with a second mark;
and using the fountain code data set marked with the first mark and the second mark as a fountain code sample set.
Preferably, the modulation scheme sample set obtaining module is further configured to:
modulating each data in the fountain code sample set according to a plurality of modulation modes and a plurality of signal-to-noise ratios to obtain a modulation mode data set;
and adding a corresponding modulation mode label to each data in the modulation mode data set to obtain the modulation mode sample set.
In a third aspect, based on the same inventive concept, the present application provides the following technical solutions through an embodiment:
a fountain code identification method for non-cooperative reception, wherein the fountain code identification model in the first aspect is applied to the fountain code identification method, and the fountain code identification method includes:
receiving IQ data;
inputting the IQ data into the second target model for identification, and if the IQ data is successfully identified, obtaining a modulation mode corresponding to the IQ data;
demodulating the IQ data according to the modulation mode to obtain original coded data;
and inputting the original coding data into the first target model for identification, and if the original coding data is identified to be fountain codes, decoding to obtain original data corresponding to the IQ data.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the invention provides a training method of a fountain code recognition model, which is characterized in that a first target model is obtained by training a first model by using a fountain code sample set; and then modulating the fountain code sample set to obtain a modulation mode sample set, thereby ensuring that the fountain code sample set and the modulation mode sample set come from the same data source. Therefore, after the modulation mode of the IQ data is identified by the second target model obtained through the training of the modulation mode sample set, the fountain code can be further identified for the original encoded data obtained after the IQ data is demodulated by the first target model, that is, the fountain code identification model constructed by the first target model and the second target model can automatically identify the fountain code in the IQ data, and the identification accuracy is improved. Feature extraction is not needed to be carried out on IQ data before fountain code identification is carried out, and the dependency of an automatic modulation mode and fountain code identification on the knowledge in the communication professional field is reduced; while the identified IQ data may be non-cooperatively accepted data.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart of a training method of a fountain code recognition model according to a first embodiment of the present invention;
fig. 2 is a block diagram illustrating an identification method of a fountain code received in a non-cooperative manner according to a second embodiment of the present invention;
fig. 3 is a functional block diagram of a training apparatus for a fountain code recognition model according to a third embodiment of the present invention;
fig. 4 is a block diagram of a training apparatus of an exemplary fountain code recognition model according to a fourth embodiment of the present invention;
fig. 5 is a block diagram of a computer-readable storage medium according to a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
First embodiment
Referring to fig. 1, a training method of a fountain code recognition model is provided in this embodiment. Specifically, the method comprises the following steps:
step S10: obtaining a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes;
step S20: inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
step S30: modulating the fountain code sample set to obtain a modulation mode sample set;
step S40: inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
step S50: constructing the first target model and the second target model into a fountain code identification model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
In step S10, the set of fountain code samples may be divided into two parts, a first part serving as a training sample for model training and a second part serving as a test sample for testing. For example, the first portion accounts for 60%, 65%, 75%, 80%, etc. of the set of fountain code samples, and the corresponding second portion accounts for 40%, 35%, 25%, 20% of the set of fountain code samples.
Both parts contain samples coded using fountain codes and samples not coded using fountain codes, which ensures that the model learns the characteristics of both kinds of samples. Furthermore, 50% of the fountain code sample set may be samples coded using fountain codes and the other 50% samples not coded using fountain codes; when dividing the training samples and the test samples, the samples coded using fountain codes and the samples not coded using fountain codes may each account for 50% within both the training samples and the test samples. Dividing the fountain code sample set in this way guarantees comparable numbers of positive and negative samples during model training, which improves the fountain code recognition accuracy of the trained first model.
Samples coded using fountain codes and samples not coded using fountain codes need to be marked so that they can be distinguished. In step S10, the obtaining steps are as follows (a minimal sketch follows the steps below):
1. acquiring a fountain code data set, which includes samples coded using fountain codes and samples not coded using fountain codes;
2. marking samples in the fountain code data set that are coded using fountain codes with a first mark; the first mark may be a specific character or ID, for example, 1;
3. marking samples in the fountain code data set that are not coded using fountain codes with a second mark; the second mark may be a specific character or ID, for example, 0;
4. taking the fountain code data set marked with the first mark and the second mark as the fountain code sample set.
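A minimal sketch of these four steps, assuming the raw encoded sequences are already available as NumPy arrays; fountain_samples, plain_samples, and the 75%/25% train/test split are illustrative assumptions (the split ratio is one of the options mentioned for step S10):

```python
import numpy as np

def build_fountain_sample_set(fountain_samples, plain_samples,
                              train_ratio=0.75, seed=0):
    """Mark fountain-coded samples with 1, non-fountain-coded samples with 0,
    shuffle, and split into training and test samples."""
    data = np.concatenate([fountain_samples, plain_samples])
    labels = np.concatenate([
        np.ones(len(fountain_samples), dtype=np.int64),   # first mark: 1
        np.zeros(len(plain_samples), dtype=np.int64),     # second mark: 0
    ])
    order = np.random.default_rng(seed).permutation(len(data))
    data, labels = data[order], labels[order]
    n_train = int(train_ratio * len(data))
    return (data[:n_train], labels[:n_train]), (data[n_train:], labels[n_train:])
```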
Step S20: and inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model.
In step S20, the neural network model may be: a convolutional neural network model, a recurrent neural network, a deep neural network, and the like. Preferably, the first model is a deep residual network model (RESNET) in the convolutional neural network family. An identity shortcut connection is introduced into the RESNET network structure; the key difference is that x_{i+1} contains an additional x_i component, i.e. the plain mapping x_{i+1} = F(x_i) becomes x_{i+1} = F(x_i) + x_i. As a result, the depth N of the neural network model can reach N ≥ 14 (a plain CNN network model is usually limited to fewer than 8 layers), and the recognition accuracy is improved to meet engineering requirements.
For example:
when the input is x, the output is F(x);
if the output of layer 1 is x_1 and the output of layer 2 is x_2, and so on, then x_i is the output of layer i;
for a CNN network without identity shortcut connections, the input of layer i+1 is the output of layer i, so x_{i+1} = F(x_i);
for a RESNET network with identity shortcut connections, the input of layer i+1 is still the output of layer i, but the output of layer i is also added to the output of layer i+1 (this is the identity shortcut connection), i.e. x_{i+1} = F(x_i) + x_i. A code sketch of such a residual block follows.
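A minimal sketch of such a residual block, assuming a PyTorch-style implementation with 1-D convolutions (the document does not name a framework; the channel count and kernel size are illustrative):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual unit: x_{i+1} = relu(F(x_i) + x_i), where F is two
    1-D convolutions (IQ data is treated as a 2-channel 1-D sequence)."""
    def __init__(self, channels: int = 32, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2                    # keep the sequence length
        self.f = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.f(x) + x)           # the identity shortcut
```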
In this embodiment, the first model is taken as a deep residual network model (RESNET) in the convolutional neural network family by way of example. During model training, initial parameters are set first and training is carried out on the training samples; after each round of training, the trained model is tested on the test samples and it is judged whether the accuracy reaches a preset value (for example, 90%, 99%, 99.5%, and the like). If not, the hyperparameters of the RESNET model (such as the convolution kernel size and the number of inception layers) are adjusted further, until the test accuracy reaches or exceeds the preset value. Specifically, the method comprises the following steps (a sketch of this loop follows the steps below):
1. inputting training samples in the fountain code sample set into a first model for training;
2. determining whether the accuracy of the trained first model meets a preset value or not according to the test samples in the fountain code sample set;
3. if not, adjusting the convolution kernel size of the first model and the number of inception layers, and continuing to input training samples in the fountain code sample set into the adjusted first model for training; when adjusting the hyperparameters, the training state of the current model can be judged by observing monitored indicators such as loss and accuracy during training, so that the hyperparameters can be adjusted in time. This hyperparameter adjustment ensures that a first target model meeting the accuracy requirement is obtained quickly.
4. And if so, taking the trained first model as a first target model.
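A minimal sketch of this train-test-adjust loop, assuming scikit-learn-style fit/score interfaces; make_resnet and the hyperparameter grid are hypothetical stand-ins for adjusting the convolution kernel size and the number of inception layers:

```python
def train_first_model(make_resnet, hyperparameter_grid, train_set, test_set,
                      target_accuracy=0.99):
    """Try hyperparameter settings until the test accuracy meets the preset value."""
    for hp in hyperparameter_grid:          # e.g. kernel sizes / inception layers
        model = make_resnet(**hp)
        model.fit(*train_set)               # step 1: train on training samples
        accuracy = model.score(*test_set)   # step 2: test accuracy
        if accuracy >= target_accuracy:     # step 4: accept the trained model
            return model                    # (step 3: otherwise try next setting)
    raise RuntimeError("no hyperparameter setting reached the preset accuracy")
```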
It should be noted that the training process for other neural network models may follow existing training approaches and is not described here again.
Step S30: and modulating the fountain code sample set to obtain a modulation mode sample set.
In step S30, the modulation mode sample set is the data obtained by modulating the fountain code sample set in certain modulation modes. When dividing the modulation mode sample set into training samples and test samples, and when proportioning the samples coded using fountain codes and the samples not coded using fountain codes within those two parts, the corresponding implementation for the fountain code sample set may be followed; details are not repeated.
Step S30 ensures that the modulation mode sample set and the fountain code sample set both come from the same data source. The second target model, obtained by training the second model on the modulation mode sample set, can therefore identify modulated IQ data that contains fountain codes.
Further, step S30 may include the following embodiments:
1. and modulating each data in the fountain code sample set according to a plurality of modulation modes and a plurality of signal-to-noise ratios to obtain a modulation mode data set.
2. And adding a corresponding modulation mode label to each data in the modulation mode data set to obtain the modulation mode sample set.
In this embodiment, the modulation mode may include any one or more of the following: BPSK, QPSK, 8PSK, PAM4, QAM16, QAM64, GFSK, and CPFSK; the signal-to-noise ratio (in dB) may be any one or more of: -20, -18, -16, -14, -12, -10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, 16 and 18.
For example, suppose 8 modulation modes and 20 signal-to-noise ratio values are used in this embodiment, with 1000 samples per class, where each sample consists of 128 consecutive IQ data points and each IQ data point has two components (I and Q); then modulation across the modulation modes and signal-to-noise ratios generates 8 x 20 x 1000 x 128 x 2 float32 values. A modulation mode sample set can then be obtained by adding a modulation mode label to each generated group of data, where the modulation mode label may be a specific character or ID; for example, the labels of the 8 modulation modes may be, in order: 0, 1, 2, 3, 4, 5, 6 and 7. A minimal sketch of this layout follows.
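In this sketch the modulate routine is a hypothetical stub; only the bookkeeping of modes, signal-to-noise ratios, and labels from the example above is shown:

```python
import numpy as np

MODULATIONS = ["BPSK", "QPSK", "8PSK", "PAM4", "QAM16", "QAM64", "GFSK", "CPFSK"]
SNRS_DB = list(range(-20, 20, 2))      # -20, -18, ..., 18 (20 values)

def build_modulation_sample_set(samples, modulate, n_per_class=1000):
    """modulate(sample, mode, snr) -> (128, 2) float32 array of IQ pairs (assumed)."""
    data, labels = [], []
    for label, mode in enumerate(MODULATIONS):          # labels 0..7, in order
        for snr in SNRS_DB:
            for s in samples[:n_per_class]:
                data.append(modulate(s, mode, snr))
                labels.append(label)
    return np.asarray(data, dtype=np.float32), np.asarray(labels, dtype=np.int64)
```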
The execution order of steps S10, S20, and S30 is not limited.
Step S40: and inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model.
In step S40, the neural network model may likewise be: a convolutional neural network model, a recurrent neural network, a deep neural network, and the like. Preferably, in this embodiment, the second model is a deep residual network model (RESNET) in the convolutional neural network family. The training of the second model may follow the training process of the first model and is not described here again.
Finally, the first target model and the second target model may be constructed as a fountain code recognition model through step S50. The second target model is used for identifying the modulation mode of the IQ data, the IQ data is demodulated through the modulation mode identified by the second target model to obtain the original coded data, and the fountain code in the original coded data is identified through the first target model to determine whether the original coded data is coded by the fountain code.
In summary, in the training method for the fountain code recognition model provided by the invention, the first model is trained using the fountain code sample set to obtain the first target model; the fountain code sample set is then modulated to obtain the modulation mode sample set, ensuring that the fountain code sample set and the modulation mode sample set come from the same data source. Therefore, after the second target model, obtained by training on the modulation mode sample set, identifies the modulation mode of the IQ data, the first target model can further identify fountain codes in the original coded data obtained by demodulating the IQ data. That is, the fountain code recognition model constructed from the first target model and the second target model can automatically identify fountain codes in IQ data, and the recognition accuracy is improved. No feature extraction needs to be performed on the IQ data before fountain code identification, which reduces the dependence of automatic modulation mode and fountain code identification on specialized communication-domain knowledge; moreover, the identified IQ data may be non-cooperatively received data.
Second embodiment
Referring to fig. 2, this embodiment provides a method for identifying a fountain code received in a non-cooperative manner, to which the fountain code recognition model of the first embodiment can be applied. Specifically, the fountain code identification method includes the following steps (a pipeline sketch follows the steps below):
step S101: receiving IQ data;
step S102: inputting the IQ data into the second target model for identification, and if the IQ data is successfully identified, obtaining a modulation mode corresponding to the IQ data;
step S103: demodulating the IQ data according to the modulation mode to obtain original coded data;
step S104: and inputting the original coding data into the first target model for identification, and if the original coding data is identified to be fountain codes, decoding to obtain original data corresponding to the IQ data.
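A minimal sketch of steps S101-S104 chained together; first_model, second_model, demodulate, and fountain_decode are hypothetical stand-ins for the two trained target models and the signal-processing helpers:

```python
def identify_fountain_code(iq_data, first_model, second_model,
                           demodulate, fountain_decode):
    modulation = second_model.predict(iq_data)    # S102: identify modulation mode
    if modulation is None:
        return None                               # identification failed
    raw_coded = demodulate(iq_data, modulation)   # S103: original coded data
    if first_model.predict(raw_coded) == 1:       # S104: fountain code? (mark 1)
        return fountain_decode(raw_coded)         # decode to original data
    return None
```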
With regard to the method in this embodiment, the terms used therein have been explained in detail in the first embodiment and will not be explained again here.
Likewise, for the beneficial effects produced by the method in this embodiment, reference may be made to the first embodiment; they will not be described in detail here.
Third embodiment
Referring to fig. 3, in the present embodiment, a training apparatus 300 for a fountain code recognition model is provided, specifically, the apparatus 300 includes:
a fountain code sample set obtaining module 301, configured to obtain a fountain code sample set, where the fountain code sample set includes samples coded using fountain codes and samples not coded using fountain codes;
a first training module 302, configured to input the fountain code sample set into a preset first model for training, so as to obtain a first target model, where the first model is a neural network model;
a modulation mode sample set obtaining module 303, configured to modulate the fountain code sample set to obtain a modulation mode sample set;
a second training module 304, configured to input the modulation mode sample set into a preset second model for training, so as to obtain a second target model, where the second model is a neural network model;
a recognition model construction module 305, configured to construct the first target model and the second target model as a fountain code recognition model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
As an optional implementation, the fountain code sample set obtaining module 301 is further configured to:
acquiring a fountain code data set;
marking samples in the fountain code data set that are coded using fountain codes with a first mark;
marking samples in the fountain code data set that are not coded using fountain codes with a second mark;
and using the fountain code data set marked with the first mark and the second mark as a fountain code sample set.
As an optional implementation manner, the modulation scheme sample set obtaining module 303 is further configured to:
modulating each data in the fountain code sample set according to a plurality of modulation modes and a plurality of signal-to-noise ratios to obtain a modulation mode data set;
and adding a corresponding modulation mode label to each data in the modulation mode data set to obtain the modulation mode sample set.
With regard to the apparatus in the present embodiment, the respective modules and their functions mentioned therein may specifically refer to those explained in the first embodiment, and will not be explained in detail here.
Fourth embodiment
Based on the same inventive concept, as shown in fig. 4, the embodiment provides a training apparatus 400 for a fountain code recognition model, which includes a memory 410, a processor 420, and a computer program 411 stored in the memory 410 and executable on the processor 420, and when the processor 420 executes the computer program 411, the following steps are implemented:
obtaining a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes; inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model; modulating the fountain code sample set to obtain a modulation mode sample set; inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model; constructing the first target model and the second target model into a fountain code identification model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
In a specific implementation process, when the processor 420 executes the computer program 411, any implementation manner in the first embodiment (or the third embodiment) may be implemented, which is not described herein again.
Fifth embodiment
Based on the same inventive concept, as shown in fig. 5, the present embodiment provides a computer-readable storage medium 500, on which a computer program 511 is stored, the computer program 511 implementing the following steps when executed by a processor:
obtaining a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes; inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model; modulating the fountain code sample set to obtain a modulation mode sample set; inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model; constructing the first target model and the second target model into a fountain code identification model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
In a specific implementation process, when the computer program 511 is executed by the processor, any implementation manner of the first embodiment (or the second embodiment) may be implemented, which is not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The method functions of the present invention may be stored in a computer-readable storage medium if they are implemented in the form of software function modules and sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A training method of a fountain code recognition model is characterized by comprising the following steps:
obtaining a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes;
inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model; wherein, include: inputting training samples in the fountain code sample set into the first model for training; determining whether the accuracy of the trained first model meets a preset value or not according to the test samples in the fountain code sample set; if so, taking the trained first model as the first target model;
modulating the fountain code sample set to obtain a modulation mode sample set;
inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
constructing the first target model and the second target model into a fountain code identification model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
2. The method of claim 1, wherein obtaining the set of fountain code samples comprises:
acquiring a fountain code data set;
marking samples in the fountain code data set that are coded using fountain codes with a first mark;
marking samples in the fountain code data set that are not coded using fountain codes with a second mark;
and using the fountain code data set marked with the first mark and the second mark as a fountain code sample set.
3. The method of claim 1, wherein modulating the set of fountain code samples to obtain a set of modulation mode samples comprises:
modulating each data in the fountain code sample set according to a plurality of modulation modes and a plurality of signal-to-noise ratios to obtain a modulation mode data set;
and adding a corresponding modulation mode label to each data in the modulation mode data set to obtain the modulation mode sample set.
4. The method of claim 1, wherein the first model and the second model are both deep residual network models.
5. The method of claim 4, wherein the training by inputting the fountain code sample set into a preset first model to obtain a first target model comprises:
inputting training samples in the fountain code sample set into the first model for training;
determining whether the accuracy of the trained first model meets a preset value or not according to the test samples in the fountain code sample set;
if not, adjusting the convolution kernel size of the first model and the number of inception layers, and continuing to input training samples in the fountain code sample set into the adjusted first model for training, until the accuracy of the trained first model meets the preset value, thereby obtaining the first target model.
6. The method of claim 1, wherein 50% of the samples in the fountain code sample set are coded using fountain codes, and 50% of the samples are not coded using fountain codes.
7. A training device for a fountain code recognition model is characterized by comprising:
the fountain code sample set acquisition module is used for acquiring a fountain code sample set, wherein the fountain code sample set comprises samples coded by fountain codes and samples not coded by fountain codes;
the first training module is used for inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model; it is also specifically used for: inputting training samples in the fountain code sample set into the first model for training; determining whether the accuracy of the trained first model meets a preset value or not according to the test samples in the fountain code sample set; if so, taking the trained first model as the first target model;
a modulation mode sample set obtaining module, configured to modulate the fountain code sample set to obtain a modulation mode sample set;
the second training module is used for inputting the modulation mode sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
the identification model construction module is used for constructing the first target model and the second target model into a fountain code identification model; the second target model is used for identifying a modulation mode of IQ data; the first target model is used for identifying fountain codes in original coded data, and the original coded data are data obtained by demodulating the IQ data in a demodulation mode identified by the second target model.
8. The apparatus of claim 7, wherein the fountain code sample set acquisition module is further configured to:
acquiring a fountain code data set;
marking samples in the fountain code data set that are coded using fountain codes with a first mark;
marking samples in the fountain code data set that are not coded using fountain codes with a second mark;
and using the fountain code data set marked with the first mark and the second mark as a fountain code sample set.
9. The apparatus of claim 7, wherein the modulation scheme sample set obtaining module is further configured to:
modulating each data in the fountain code sample set according to a plurality of modulation modes and a plurality of signal-to-noise ratios to obtain a modulation mode data set;
and adding a corresponding modulation mode label to each data in the modulation mode data set to obtain the modulation mode sample set.
10. A fountain code identification method for uncooperative reception, wherein the fountain code identification model of any one of claims 1-8 is applied to the fountain code identification method, and the fountain code identification method comprises:
receiving IQ data;
inputting the IQ data into the second target model for identification, and if the IQ data is successfully identified, obtaining a modulation mode corresponding to the IQ data;
demodulating the IQ data according to the modulation mode to obtain original coded data;
and inputting the original coding data into the first target model for identification, and if the original coding data is identified to be fountain codes, decoding to obtain original data corresponding to the IQ data.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910183728.5A | 2019-03-12 | 2019-03-12 | Training method, device and recognition method of fountain code recognition model
Publications (2)

Publication Number | Publication Date
---|---
CN109936423A (en) | 2019-06-25
CN109936423B (en) | 2021-11-30
Patent Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2018176889A1 | 2017-03-27 | 2018-10-04 | South China University of Technology | Method for automatically identifying modulation mode for digital communication signal
CN108282263A | 2017-12-15 | 2018-07-13 | Xidian University | Coded modulation joint recognition method based on one-dimensional deep residual lightweight network
CN108616470A | 2018-03-26 | 2018-10-02 | Tianjin University | Modulation signal recognition method based on convolutional neural networks
CN108600135A | 2018-04-27 | 2018-09-28 | Institute of Computing Technology, Chinese Academy of Sciences | A signal modulation mode recognition method

Non-Patent Citations (3)

Title
---
"Compilation of all Rel-13 WIDs"; Alain Sultan; 3GPP TSG Meeting #73, SP-160685/CP-160559/RP-161800; 2016-09-22
"Performance of AdaBoost classifier in recognition of superposed modulations for MIMO TWRC with physical-layer network coding"; Wassim et al.; 2017 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM); 2017-09-22
"Research on modulation recognition technology based on deep learning" (基于深度学习的调制识别技术研究); Zhao Jiwei; China Master's Theses Full-text Database, Information Science and Technology, I136-217; 2019-02-28
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant