
CN116796284A - Unmanned aerial vehicle cluster-oriented intention recognition method applied to countermeasure environment - Google Patents


Info

Publication number
CN116796284A
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
cluster
graph
Prior art date
Legal status
Pending
Application number
CN202310635829.8A
Other languages
Chinese (zh)
Inventor
彭志红
何辉
陈杰
王文杰
尚沛桥
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Priority to CN202310635829.8A
Publication of CN116796284A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/09 Supervised learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an intention recognition method for UAV clusters in an adversarial environment. First, threat-coefficient similarities and distance-similarity masks between UAVs are computed from the physical characteristics of the UAV cluster, and the battlefield state is mapped into a graph structure. A graph neural network with an attention mechanism then processes the graph-structured information, computing the mutual-influence weights between neighbouring UAVs and extracting deep feature information for each UAV. A graph pooling method fuses the extracted feature representations into a comprehensive feature description of the UAV cluster, and a fully connected network trained by supervised learning performs high-accuracy intention classification. The invention can identify the intention of a UAV cluster in a complex battlefield environment containing diverse deceptive information. By transferring the intention recognition problem for UAV clusters into the domain of graph data processing and introducing graph neural network methods, it overcomes the field's excessive reliance on expert experience and its low recognition accuracy.

Description

An intention recognition method for UAV clusters in an adversarial environment

Technical field

The invention relates to an intention recognition method for UAV clusters in an adversarial environment, and belongs to the technical field of UAV intention recognition.

Background

Unmanned equipment is increasingly favoured, and the importance of UAVs is evident in modern warfare. Given the enormous scale advantage that UAV swarm operations provide, research on countermeasures against them is an important topic. The air-defence domain, however, faces difficulties in early-warning detection and in command and control: swarm UAVs are mostly micro UAVs with weak infrared signatures, small radar cross-sections, and low flight altitudes, and most swarm systems possess adaptive capabilities such as autonomous formation and intelligent decision-making, which makes countering UAV swarms even harder. Faced with these challenges, technology for identifying the intentions of hostile UAVs and their swarms becomes especially critical. Accurately and quickly predicting the enemy's combat intention helps the air-combat offence-defence decision-making system choose more appropriate actions, and can help one's own side seize air superiority or even win the engagement.

In the intention recognition problem, a cluster's intention is exhibited through a series of time-ordered actions of multiple individual UAVs. Because the interactions inside the cluster are hidden and intricate, recognising the cluster's intention in a complex and changeable battlefield environment is an urgent problem.

Summary of the invention

The technical problem solved by the invention is to overcome the shortcomings of the prior art by proposing an intention recognition method for UAV clusters in an adversarial environment. The method achieves accurate recognition of cluster intentions even when the interactions inside the cluster are hidden and intricate and the battlefield environment is complex and changeable. For data preprocessing, threat-coefficient similarity and a distance-similarity mask are used to construct the cluster's graph-structured data. For intention recognition, a graph attention mechanism and a graph pooling method extract the deep features of the clustered UAVs, and a fully connected network then identifies the cluster's intention.

To achieve the above objects, the technical solution provided by the invention is:

An intention recognition method for UAV clusters in an adversarial environment, comprising the following steps:

Step 1: compute the threat coefficients and spatial distance mask of the UAV cluster from the physical characteristics of the cluster at a given instant, and construct the cluster's graph-structured data from the computed threat coefficients and spatial distance mask;

Step 2: process the graph-structured data obtained in step 1 with a graph attention network to obtain the deep feature information of each UAV;

Step 3: obtain the deep feature information of the entire UAV cluster from the per-UAV deep features of step 2, and feed it into a fully connected network; after supervised training, the network classifies the cluster's intention into one of: attack, feint, retreat, electronic jamming, or low-altitude raid.

In step 1, the physical characteristics include each UAV's spatial position, heading angle, pitch angle, speed, radar on/off status, and UAV type;

In step 1, suppose the blue cluster B contains M UAVs B_i (i = 1, 2, …, M) and the red cluster R contains N UAVs R_j (j = 1, 2, …, N). The threat coefficients of UAV R_j against UAV B_i comprise a distance threat coefficient, an angle threat coefficient, a speed threat coefficient, and an air-combat-effectiveness threat coefficient, with formulas as follows [reproduced only as figures in the original]:

where the distance threat coefficient lies in [0, 1]; D_ji is the distance between UAV R_j and UAV B_i; λ1, λ2, and ε are constant parameters; R_aR is the effective range of the red missiles; T_rR is the maximum range of the red missiles; the speeds of UAV B_i and UAV R_j enter the speed threat coefficient; and C_i and C_j are the air-combat effectiveness indices of UAV B_i and UAV R_j, respectively;

Let the distance mask between red UAV R_j and red UAV R_j′ (j′ = 1, 2, …, N) be DI_jj′, given by:

DI_jj′ = INT(d(R_j′, R_j) ≤ μ)

where μ is a constant, μ ∈ [0.6, 0.75]; INT is the truth-value conversion function (1 if the condition holds, 0 otherwise); and d(R_j′, R_j) is the relative distance between UAV R_j and red UAV R_j′;

From the distance, angle, speed, and air-combat-effectiveness threat coefficients together with the distance mask, the graph-structured data of the UAV cluster is constructed as [construction formula reproduced only as a figure in the original]:

where j″ = 1, 2, …, N, and th_jj′ is the comprehensive threat coefficient of UAVs R_j and R_j′ against the blue side, computed from the distance, angle, speed, and air-combat-effectiveness threat coefficients by the grey relational matrix method;

G_jj′ ∈ [0, 1]

The maximum of the obtained G_jj′ values is selected; if G_jz is the maximum, UAV R_j is connected to UAV R_z, completing the construction of the cluster's graph data structure;
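As a concrete sketch of this first step, the following Python fragment builds the cluster graph from UAV positions and a comprehensive threat-similarity matrix. The similarity values and the normalised-distance threshold here are illustrative stand-ins, since the patent gives its similarity formulas only as figures.

```python
import numpy as np

def build_cluster_graph(pos, thr_sim, mu=0.7):
    """Step-1 sketch: distance mask DI_jj' plus threat similarity -> graph.

    pos     : (N, 3) UAV positions (assumed pre-normalised so that the
              threshold mu in [0.6, 0.75] is meaningful)
    thr_sim : (N, N) comprehensive threat-similarity candidates in [0, 1]
    """
    n = len(pos)
    # DI_jj' = INT(d(R_j', R_j) <= mu): 1 within the threshold, else 0
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    mask = (d <= mu).astype(float)
    np.fill_diagonal(mask, 0.0)            # no self-edges
    G = thr_sim * mask                     # masked relation strength G_jj'
    adj = np.zeros((n, n))
    for j in range(n):                     # connect R_j to its strongest peer R_z
        if G[j].max() > 0:
            z = int(np.argmax(G[j]))
            adj[j, z] = adj[z, j] = 1.0    # undirected edge
    return adj
```

Each UAV ends up linked to the single neighbour with the largest masked similarity, matching the argmax rule stated above.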

In step 2, deep feature information is obtained by computing the attention coefficient between UAV R_j and UAV R_j′ and then deriving the deep features from the computed attention coefficients;

The attention coefficient α_jj′ is computed as follows: a learnable shared parameter W performs feature enhancement on the connected UAVs R_j and R_j′; the enhanced features are concatenated, combining the features of R_j and R_j′ into a single vector; a mapping a then maps the combined vector to a real number, and SoftMax normalisation yields the attention coefficient of the connected pair:

α_jj′ = SoftMax(e_jj′) = exp(e_jj′) / Σ_{k∈N_j} exp(e_jk)

where N_j is the set of the other UAVs in the red cluster that are connected to UAV R_j, and e_jj′ denotes the mapping of the concatenated features;

e_jj′ = a([Wh_j || Wh_j′])

h_j is the set of physical characteristics of UAV R_j;

The deep feature information of UAV R_j is then obtained as the attention-weighted aggregation of its neighbours' enhanced features [the original shows this formula only as a figure]:

h̃_j = σ( Σ_{j′∈N_j} α_jj′ W h_j′ )

where σ is a non-linear activation function;

In step 3, the deep feature information of the entire UAV cluster is obtained from the per-UAV deep features via a graph pooling operation. Specifically, let A^(l) be the adjacency matrix of the l-th pooling layer, X^(l) the feature matrix of all nodes in layer l, and Z^(l) the features extracted at each node of the layer-l graph; the next layer's graph structure is then given by:

A^(l+1), X^(l+1) = DiffPool(A^(l), Z^(l))

where DiffPool denotes solving for the assignment matrix S^(l) with a graph neural network; S^(l) expresses numerically with what weight each node of one layer is assigned to the nodes of the next layer, and is obtained as:

S^(l) = softmax(GNN_{l,pool}(A^(l), X^(l)))

where GNN_{l,pool} is the graph neural network used for graph pooling; the layer-to-layer iteration formula of this graph neural network is reproduced only as a figure in the original;

The optimisation constraint function of the graph neural network includes a link regularisation term and an entropy regularisation term. The link regularisation term is:

L_LP = ||A^(l) − S^(l) S^(l)T||_F

The entropy regularisation term is [reconstructed from context; the original shows the formula only as a figure]:

L_E = (1/n) Σ_{p=1}^{n} H(S_p^(l))

where H(·) denotes the entropy of the p-th row of S^(l);

where n is the number of nodes in the current pooling layer and p = 1, 2, …, n;
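Under the standard DiffPool reading of these two constraints, a minimal numpy sketch is:

```python
import numpy as np

def diffpool_losses(A, S):
    """Auxiliary losses constraining the pooling assignment S.

    A : (n, n) adjacency of the current pooling layer
    S : (n, m) row-stochastic assignment of n nodes to m condensed nodes
    """
    # Link regulariser: strongly connected nodes should be assigned to the
    # same condensed node, i.e. S S^T should resemble A (Frobenius norm).
    L_lp = np.linalg.norm(A - S @ S.T, ord="fro")
    # Entropy regulariser: each row of S should be close to one-hot,
    # avoiding a node spreading equal weight over all condensed nodes.
    eps = 1e-12
    L_e = -np.mean(np.sum(S * np.log(S + eps), axis=1))
    return L_lp, L_e
```

A sharp (near one-hot) assignment yields a near-zero entropy term, while a uniform assignment is penalised, which is exactly the behaviour the constraint asks for.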

The UAV cluster comprises multi-rotor decoy UAVs, integrated reconnaissance-strike UAVs, multi-rotor relay UAVs, and small reconnaissance UAVs.

Beneficial effects

(1) The method of the invention builds a graph-structured model of the UAV cluster by combining a graph-construction method with the cluster's physical information, describing the interaction relationships inside the cluster.

(2) Using a graph neural network, the method solves for the mutual-influence weights between every pair of UAVs in the cluster and, combined with the interaction relationships in the graph structure, computes and extracts the deep hidden features of each UAV in the cluster.

(3) The method adopts graph pooling to fuse the individual features in the UAV cluster, taking the final fused node feature as the overall feature of the cluster. A fully connected classification network then processes the fused overall feature to classify the UAV cluster.

(4) In the method, the threat coefficient describes how much each UAV threatens the enemy, and its similarity expresses the degree of coordination between UAVs. The spatial distance mask prunes UAV relationships by comparing inter-UAV distances against a masking threshold. The graph-structured model of the UAV cluster established in this way is the basis for the subsequent feature extraction, information condensation, and intention recognition.

(5) In the method, minimising the constraint function condenses the UAV cluster to a single node over the iterations. Two optimisation ideas constrain the iteration: first, two UAVs that interact more strongly should be more easily mapped to the same condensed node of the next layer; second, a UAV should be prevented from having equal mapping weights to all condensed nodes of the next layer. Once the final condensed node feature of the cluster is obtained, the fully connected network performs intention recognition on the cluster.

(6) Compared with classic methods that rely on expert experience, such as template matching and D-S evidential reasoning, the invention introduces a supervised neural network and thereby removes the dependence on expert experience, avoiding the sharp accuracy drop caused by a lack of reliable, complete, and well-matched expertise. When constructing the cluster's graph data structure from the physical characteristics of the UAV cluster, the invention proposes a spatial distance mask to reduce complexity: grouping by a spatial distance threshold between cluster members breaks the whole group into parts, which both lowers the complexity of interaction reasoning and refines the interaction modelling. Compared with simply and directly taking a weighted sum over a UAV and its neighbours, the invention focuses on the interaction relationships of the cluster; the graph attention mechanism of step 2 accounts for the unequal influence between UAVs, realises information-feature sharing between them, and extracts more representative deep UAV features. Compared with the classic approach of representing the cluster's feature information by the plain average of all UAV features, the invention fully considers the internal relationships of the UAV cluster: the proposed graph pooling method repeatedly finds similar, related nodes and fuses them, ensuring that the condensation is well founded. The pooling method extracts the coarse-grained structural information of the graph well and retains feature vectors that better express cluster information.

(7) The invention discloses an intention recognition method for UAV clusters in an adversarial environment. The method builds a battlefield intention recognition model on a deep graph neural network and mainly comprises the following steps. First, physical characteristics of the UAV cluster such as heading angle, position, and speed are used to compute the threat-coefficient similarities and distance-similarity masks between UAVs, mapping the battlefield state into a graph structure. A graph neural network with an attention mechanism then processes the graph-structured information, computing the mutual-influence weights between neighbouring UAVs and deeply extracting the UAV feature information. Graph pooling fuses the extracted feature representations into a comprehensive feature description of the UAV cluster. Finally, a learning network built from fully connected modules achieves high-accuracy intention classification through supervised learning. The invention can recognise the intention of a UAV cluster in a complex battlefield environment with abundant deceptive information; by transferring the intention recognition problem to graph data processing and introducing graph neural networks, it resolves the over-reliance on expert experience and the low recognition accuracy of the field.

Description of the drawings

Figure 1 is a schematic diagram of the air combat situation;

Figure 2 is a schematic diagram of the attention mechanism;

Figure 3 is a schematic diagram of feature-extraction weighting;

Figure 4 is a schematic diagram of graph pooling.

Detailed description of the embodiments

The invention is described in detail below with reference to the accompanying drawings and a specific implementation example.

An intention recognition method for UAV clusters in an adversarial environment, comprising the following steps:

Step 1: establish the graph data structure of the cluster from the physical characteristics of the UAV cluster

The invention uses threat-coefficient similarity and a spatial distance mask to construct the cluster's graph data structure. The construction is illustrated with the blue UAV group B and the red UAV group R. Suppose cluster B contains M UAVs and cluster R contains N UAVs; the attack posture of B_i (i = 1, 2, …, M) and R_j (j = 1, 2, …, N) is shown in Figure 1. Let the distance threat coefficient, angle threat coefficient, speed threat coefficient, and air-combat-effectiveness threat coefficient of UAV R_j against UAV B_i be defined by formulas (1)-(4) [reproduced only as figures in the original]:

where the distance threat coefficient lies in [0, 1]; D_ji is the distance between the UAVs; λ1 and λ2 are constant parameters; R_aR is the effective range of the red missiles; T_rR is the maximum range of the red missiles; the speeds of UAV B_i and UAV R_j enter the speed threat coefficient; and C_i and C_j are the air-combat effectiveness indices of UAV B_i and UAV R_j, respectively;

Since state information from neighbouring UAVs matters more than state information from UAVs elsewhere, two UAVs whose distance exceeds a certain threshold can be regarded as unrelated. The invention therefore proposes a distance mask to solve for position similarity. Let the distance mask between the j-th and j′-th red UAVs be DI_jj′, given by:

DI_jj′ = INT(d(R_j′, R_j) ≤ μ)    (5)

where μ is a constant, μ ∈ [0.6, 0.75].

Combining the distance, angle, speed, and air-combat-effectiveness threat coefficients with the distance mask gives the graph-construction formula (6) [reproduced only as a figure in the original]:

The formula follows the form of the sigmoid function, so every G_jj′ lies between 0 and 1. After computing the relation measure between the j-th red UAV and every other red UAV, the maximum value G_jz is selected and the j-th red UAV is connected to the corresponding z-th red UAV, completing the construction of the cluster's graph data structure.

Step 2: extract the deep features of the UAVs with a graph attention network

Feature extraction inside a cluster faces the following problem: for each UAV, its neighbours exert different influence weights, and ignoring this prevents a UAV from effectively exploiting its neighbours' feature information. Directly taking a weighted sum of neighbour features, for example, necessarily discards the information implicit in the cluster's interaction relationships. Step 2 therefore first computes the attention coefficients between UAVs from the interaction graph, and then extracts the deep UAV features using those coefficients.

The extraction of the attention coefficient α_jj′ is shown in Figure 2. The specific procedure is: a learnable shared parameter W first performs feature enhancement on the two connected UAVs R_j and R_j′; the enhanced features are concatenated, combining the features of R_j and R_j′ into one vector; a mapping a then maps the combined vector to a real number, and SoftMax normalisation yields the attention coefficient of the connected pair:

α_jj′ = SoftMax(e_jj′) = exp(e_jj′) / Σ_{k∈N_j} exp(e_jk)    (7)

where N_j is the set of the other UAVs in the red cluster that are connected to UAV R_j, and e_jj′ denotes the mapping of the concatenated features:

e_jj′ = a([Wh_j || Wh_j′])    (8)

As shown in Figure 3, once the attention coefficients of all neighbouring UAVs are obtained, the deep hidden features of a UAV after fusing neighbourhood information are given by the weighted sum of the attention coefficients and the enhanced physical features:

h̃_j = σ( Σ_{j′∈N_j} α_jj′ W h_j′ )    (9)

where σ is a non-linear activation function.
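The whole of step 2 can be sketched in a few lines of numpy. The LeakyReLU non-linearity on the pair mapping and the tanh output activation are conventional graph-attention choices assumed here, since the source reproduces its attention formulas from figures.

```python
import numpy as np

def gat_layer(h, adj, W, a, slope=0.2):
    """One graph-attention step over the UAV graph (step-2 sketch).

    h   : (n, f_in)  raw physical features of each UAV
    adj : (n, n)     adjacency from step 1 (1 = connected)
    W   : (f_in, f_out) shared feature-enhancement parameter
    a   : (2 * f_out,)  mapping of the concatenated pair to a scalar
    """
    n = h.shape[0]
    Wh = h @ W                                   # shared feature enhancement
    e = np.full((n, n), -np.inf)                 # -inf masks unconnected pairs
    for j in range(n):
        for k in range(n):
            if adj[j, k] or j == k:              # include self, as in GAT
                x = a @ np.concatenate([Wh[j], Wh[k]])   # e_jk = a([Wh_j || Wh_k])
                e[j, k] = np.maximum(slope * x, x)       # LeakyReLU
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)        # SoftMax over neighbours
    return np.tanh(att @ Wh), att                # deep features, alpha_jj'
```

Unconnected pairs receive zero attention weight, so each UAV aggregates only over itself and its graph neighbours.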

Step 3: apply graph pooling to the cluster's graph data structure and classify

The graph pooling operation is analogous to the pooling layer of a convolutional neural network: it repeatedly condenses features into higher-level data. The flow of graph pooling is shown in Figure 4. After each pooling pass, nodes marked with the same colour undergo feature fusion and the number of nodes in the graph structure decreases. After the final pooling operation only one node remains, and its feature information is then used to predict the cluster's intention.

The specific pooling operation can be expressed as follows. Let A^(l) be the adjacency matrix of layer l, X^(l) the feature matrix of all nodes in layer l, and Z^(l) the features extracted at each node of the layer-l graph. The next layer's graph structure is then:

A^(l+1), X^(l+1) = DiffPool(A^(l), Z^(l))    (10)

where DiffPool denotes solving for the assignment matrix through a graph neural network, expressing numerically with what weight each node of one layer is assigned to the nodes of the next layer:

S^(l) = softmax(GNN_{l,pool}(A^(l), X^(l)))    (11)

Next, weighted aggregation according to the learned assignment matrix yields the next layer's graph structure:

X^(l+1) = S^(l)T Z^(l)    (12)

A^(l+1) = S^(l)T A^(l) S^(l)    (13)

Meanwhile, to ease the network's training, regularisation terms must be added to constrain it. The first is the link regularisation term, given in equation (14). An entry of A^(l) represents the connection strength between a pair of layer-l nodes: the stronger the connection between two nodes, the larger the value. The corresponding entry of S^(l)S^(l)T multiplies and sums the matching rows of S^(l), so the greater the probability that the two nodes are mapped to the same condensed node, the larger it becomes. Minimising this constraint therefore makes strongly connected node pairs more likely to be mapped to the same node of the next layer. The second is the entropy regularisation term, given in equation (15). Entropy measures the disorder of a distribution; decreasing it reduces the uncertainty of the mapping distribution and avoids a node having equal mapping weights over all nodes of the next layer.

L_LP = ||A^(l) - S^(l) S^(l)T||_F  (14)

L_E = (1/n) Σ_{p=1}^{n} H(S_p^(l))  (15)

where H denotes the entropy function, S_p^(l) is the p-th row of S^(l), and n is the number of nodes of the current pooling layer.
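A minimal sketch of the two auxiliary losses, assuming the standard DiffPool form of the link-prediction term ||A - SSᵀ||_F and of the mean row-wise entropy term (the names `diffpool_regularizers` and `eps` are illustrative):

```python
import numpy as np

def diffpool_regularizers(A, S, eps=1e-12):
    """Auxiliary losses for one DiffPool layer.

    A : (n, n) adjacency of the current layer
    S : (n, m) soft assignment matrix (each row sums to 1)
    """
    # connection regularizer: strongly connected node pairs should
    # map to the same condensed node
    L_lp = np.linalg.norm(A - S @ S.T, ord='fro')
    # entropy regularizer: mean row entropy; minimizing it pushes each
    # node's assignment towards a near one-hot distribution
    L_ent = -np.mean(np.sum(S * np.log(S + eps), axis=1))
    return L_lp, L_ent

A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
S_sharp = np.array([[1., 0.], [1., 0.], [0., 1.]])   # near one-hot assignment
S_flat  = np.full((3, 2), 0.5)                       # maximally uncertain
print(diffpool_regularizers(A, S_sharp)[1] < diffpool_regularizers(A, S_flat)[1])  # True
```

The flat assignment attains the maximal entropy log 2 per row, which is exactly the degenerate case the regularizer penalizes.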

After the graph pooling operation is completed, the final condensed node features are fed into a learnable fully-connected neural network to classify the intent of the cluster.

Example

To further illustrate the implementation steps of the present invention, an example is given below:

A red cluster of N = 3 unmanned aerial vehicles attacks a blue cluster of M = 5 unmanned aerial vehicles. The physical characteristics of the red cluster relative to blue unmanned aerial vehicle B1 are as follows

Substituting these into the threat coefficient formulas above gives the threat coefficient of each red unmanned aerial vehicle with respect to blue unmanned aerial vehicle B1:

Similarly, the threat coefficients of each red unmanned aerial vehicle with respect to the other blue unmanned aerial vehicles can be obtained.

Since the red cluster is small and the pairwise distances between its unmanned aerial vehicles are short, all spatial distance mask entries are 1.
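The mask computation is straightforward to sketch. The positions and threshold value below are hypothetical; only the rule DI_jj' = INT(d(R_j', R_j) ≤ μ) comes from the method:

```python
import numpy as np

def distance_mask(pos, mu):
    """Spatial distance mask: DI[j, j'] = INT(d(R_j', R_j) <= mu)."""
    # pairwise Euclidean distances between all UAV positions
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (d <= mu).astype(int)

# three closely spaced red UAVs (hypothetical normalized positions)
pos = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [0.0, 0.1, 0.0]])
print(distance_mask(pos, mu=0.7))  # every entry is 1, as in this example
```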

The comprehensive threat coefficient Gjj' of unmanned aerial vehicle Rj and unmanned aerial vehicle Rj' against the blue side can be calculated by the grey relational matrix method; the maximum of the obtained Gjj' is selected to construct the graph data structure of the unmanned aerial vehicle cluster. The graph representation matrix in this example indicates that unmanned aerial vehicle R1 is connected to unmanned aerial vehicle R2, and unmanned aerial vehicle R1 is connected to unmanned aerial vehicle R3.
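The graph-construction rule (connect each UAV to the partner with the largest comprehensive threat coefficient, subject to the spatial distance mask) can be sketched as follows. The matrix `G` below is hypothetical, chosen so that the result matches the adjacency described in this example; the grey relational computation itself is not shown:

```python
import numpy as np

def adjacency_from_threat(G, mask):
    """Connect each UAV to the neighbour with the largest comprehensive
    threat coefficient G[j, j'], subject to the spatial distance mask.

    G    : (N, N) comprehensive threat coefficients (diagonal ignored)
    mask : (N, N) spatial distance mask, 1 = close enough to connect
    """
    N = G.shape[0]
    Gm = np.where(mask.astype(bool), G, -np.inf)
    np.fill_diagonal(Gm, -np.inf)      # a UAV is not its own neighbour
    A = np.zeros((N, N), dtype=int)
    for j in range(N):
        z = int(np.argmax(Gm[j]))      # strongest partner of R_j
        A[j, z] = A[z, j] = 1          # undirected edge R_j -- R_z
    return A

# hypothetical G for the 3-UAV example: R1 pairs strongest with R2 and R3
G = np.array([[0.0, 0.9, 0.8],
              [0.9, 0.0, 0.4],
              [0.8, 0.4, 0.0]])
print(adjacency_from_threat(G, np.ones((3, 3), dtype=int)))
```

With these values the result connects R1 with R2 and R1 with R3, matching the graph described in the example.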

The obtained graph representation matrix and the original physical features of the cluster are input into the graph neural network; the graph attention network computes the pairwise attention coefficients within the red cluster, and deep feature information is obtained from the computed attention coefficients. This step is carried out by the neural network model, so the manual calculation is not shown; the resulting deep feature information of unmanned aerial vehicles R1, R2 and R3 is as follows:
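For reference, a single GAT-style layer of the kind used here can be sketched in NumPy. The LeakyReLU on the attention logits is an assumption carried over from the standard GAT formulation; the patent itself only specifies mapping the concatenated features to a real number followed by softmax. The features and weights below are random stand-ins:

```python
import numpy as np

def gat_layer(H, A, W, a, leak=0.2):
    """One GAT-style attention layer over the threat-based graph.

    H : (N, F)   raw physical features h_j of the UAVs
    A : (N, N)   adjacency matrix from the graph construction step
    W : (F, Fp)  shared learnable projection ("feature enhancement")
    a : (2*Fp,)  attention mapping applied to concatenated features
    """
    Wh = H @ W
    N = H.shape[0]
    # e[j, k] = a([Wh_j || Wh_k]) for every ordered pair of UAVs
    e = np.array([[a @ np.concatenate([Wh[j], Wh[k]]) for k in range(N)]
                  for j in range(N)])
    e = np.where(e > 0, e, leak * e)          # LeakyReLU, as in standard GAT
    e = np.where(A.astype(bool), e, -np.inf)  # attend only to connected UAVs
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)  # softmax over each UAV's neighbours
    return att @ Wh                             # deep feature of each UAV

# the 3-UAV graph from this example: R1--R2 and R1--R3
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
rng = np.random.default_rng(0)
H, W = rng.normal(size=(3, 4)), rng.normal(size=(4, 4))
a = rng.normal(size=(8,))
deep = gat_layer(H, A, W, a)
print(deep.shape)  # (3, 4)
```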

The deep feature information of unmanned aerial vehicles R1, R2 and R3 is input into the graph pooling network, and the two constraint terms are continuously minimized through parameter updates to obtain the deep feature information of the red cluster.

The resulting cluster feature is input into the trained fully-connected network to classify the intent of the cluster; the final output is Attack, consistent with the ground-truth label.
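The final classification head can be sketched as a small fully-connected network over the condensed cluster feature. The five intent labels follow claim 10 (the English renderings are approximate); the hidden size, ReLU activation, and random weights below are illustrative stand-ins for the trained parameters:

```python
import numpy as np

# intent classes as enumerated in claim 10 (approximate translations)
INTENTS = ["attack", "feint", "withdrawal",
           "electronic interference", "low-altitude attack"]

def classify_intent(g, W1, b1, W2, b2):
    """Fully-connected classification head over the condensed cluster feature g.

    g : (d,) deep feature of the whole cluster from the graph pooling network
    """
    h = np.maximum(g @ W1 + b1, 0.0)     # hidden layer with ReLU (assumed)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())
    p = p / p.sum()                      # softmax over the five intent classes
    return INTENTS[int(np.argmax(p))], p

# random stand-ins for the trained weights, just to exercise the head
rng = np.random.default_rng(0)
d, hdim = 6, 16
g = rng.normal(size=d)
label, p = classify_intent(g,
                           rng.normal(size=(d, hdim)), np.zeros(hdim),
                           rng.normal(size=(hdim, len(INTENTS))), np.zeros(len(INTENTS)))
print(label in INTENTS)  # True
```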

The above are merely preferred embodiments of the present invention. The present invention is not limited to the above embodiments; any modification, substitution, combination, or simplification made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. The intent recognition method for the unmanned aerial vehicle cluster in the countermeasure environment is characterized by comprising the following steps of:
firstly, calculating threat coefficients and a spatial distance mask of an unmanned aerial vehicle cluster according to physical characteristics of the unmanned aerial vehicle cluster at the same time, and constructing a graph structure data form of the unmanned aerial vehicle cluster according to the calculated threat coefficients and the spatial distance mask of the unmanned aerial vehicle cluster;
secondly, using the graph attention network to process the graph structure data obtained in the first step to obtain deep characteristic information of each unmanned aerial vehicle;
thirdly, deep characteristic information of the whole unmanned aerial vehicle cluster is obtained according to the deep characteristic information of each unmanned aerial vehicle obtained in the second step, the obtained deep characteristic information of the whole unmanned aerial vehicle cluster is input into a fully-connected network, and classification of unmanned aerial vehicle cluster intentions is achieved after supervised training and learning.
2. The method for identifying intention of unmanned aerial vehicle clusters in countermeasure environment according to claim 1, wherein:
in the first step, the physical characteristics include the spatial position, course angle, pitch angle, speed, radar on/off state and type of each unmanned aerial vehicle.
3. The intention recognition method for unmanned aerial vehicle clusters in a countermeasure environment according to claim 1 or 2, characterized in that:
in the first step, the blue cluster B contains M unmanned aerial vehicles B_i, i=1,2…M, and the red cluster R contains N unmanned aerial vehicles R_j, j=1,2…N; the threat coefficients of unmanned aerial vehicle R_j to unmanned aerial vehicle B_i include a distance threat coefficient thR_ji^D, an angle threat coefficient thR_ji^A, a speed threat coefficient thR_ji^V and an air combat effectiveness threat coefficient thR_ji^C, with formulas as follows:
wherein thR_ji^D ∈ [0,1]; D_ji is the distance between unmanned aerial vehicle R_j and unmanned aerial vehicle B_i; λ1 and λ2 are constant parameters; φ_ji is the direction angle from unmanned aerial vehicle R_j to unmanned aerial vehicle B_i; K_aR is the effective acting distance of the missile; T_rR is the maximum acting distance of the missile; v_Bi is the speed of unmanned aerial vehicle B_i; v_Rj is the speed of unmanned aerial vehicle R_j; C_i is the air combat effectiveness index of unmanned aerial vehicle B_i, and C_j is the air combat effectiveness index of unmanned aerial vehicle R_j.
4. A method for identifying intent of unmanned aerial vehicle-oriented clusters in a challenge environment according to claim 3, wherein:
the distance mask between red unmanned aerial vehicle R_j and red unmanned aerial vehicle R_j' (j'=1,2…N) is DI_jj', with the formula:
DI_jj' = INT(d(R_j', R_j) ≤ μ)
wherein μ is a constant and μ ∈ [0.6, 0.75]; INT denotes the truth transformation function; d(R_j', R_j) denotes the relative distance between unmanned aerial vehicle R_j and red unmanned aerial vehicle R_j';
5. the method for identifying the intention of the unmanned aerial vehicle cluster in the countermeasure environment according to claim 4, wherein:
according to the distance threat coefficient, the angle threat coefficient, the speed threat coefficient, the air combat effectiveness threat coefficient and the distance mask, the graph structure data of the unmanned aerial vehicle cluster are constructed as follows:
wherein j″=1,2…N; G_jj' is the comprehensive threat coefficient of unmanned aerial vehicle R_j and unmanned aerial vehicle R_j' against the blue side, calculated by the grey relational matrix method from the distance threat coefficient, the angle threat coefficient, the speed threat coefficient and the air combat effectiveness threat coefficient, with G_jj' ∈ [0,1];
the maximum value of G_jj' is selected and denoted G_jz; unmanned aerial vehicle R_j is connected with unmanned aerial vehicle R_z, completing the construction of the graph data structure of the unmanned aerial vehicle cluster.
6. The method for identifying the intention of the unmanned aerial vehicle cluster in the countermeasure environment according to claim 5, wherein the method comprises the following steps:
in the second step, the method for obtaining the deep characteristic information is: the attention coefficient between unmanned aerial vehicle R_j and unmanned aerial vehicle R_j' is computed, and the deep characteristic information is obtained according to the computed attention coefficients;
the attention coefficient α_jj' is calculated as follows: the features of the connected unmanned aerial vehicles R_j and R_j' are enhanced with a learnable shared parameter matrix W; the enhanced features of unmanned aerial vehicle R_j and unmanned aerial vehicle R_j' are then concatenated; the concatenated vector is finally mapped to a real number by the mapping a and normalized by softmax, giving the attention coefficient of the connected unmanned aerial vehicles R_j and R_j':
α_jj' = softmax(e_jj') = exp(e_jj') / Σ_{k∈N_j} exp(e_jk)
wherein N_j is the set of other unmanned aerial vehicles in the red cluster connected to unmanned aerial vehicle R_j; e_jj' denotes the mapping of the concatenated features:
e_jj' = a([W h_j || W h_j'])
h_j is the set of physical features of unmanned aerial vehicle R_j;
the deep characteristic information h'_j of unmanned aerial vehicle R_j is:
h'_j = σ( Σ_{j'∈N_j} α_jj' W h_j' )
7. the method for identifying the intention of the unmanned aerial vehicle cluster in the countermeasure environment according to claim 6, wherein:
in the third step, the deep characteristic information of the whole unmanned aerial vehicle cluster is obtained from the deep characteristic information of each unmanned aerial vehicle through a graph pooling operation, specifically: let A^(l) be the adjacency matrix corresponding to the l-th pooling layer, X^(l) the feature matrix of all nodes in the l-th layer, and Z^(l) the features on each node extracted from the graph of the l-th layer; the graph structure of the next layer is then derived from:
A^(l+1), X^(l+1) = DiffPool(A^(l), Z^(l))
wherein DiffPool denotes solving a weight matrix S^(l) through a graph neural network; S^(l) numerically represents the weight with which each node of the previous layer is assigned to the nodes of the next layer, and is solved by:
S^(l) = softmax(GNN_{l,pool}(A^(l), X^(l)))
wherein GNN_{l,pool} denotes the graph neural network used for graph pooling, whose inter-layer iterative formula is as follows:
8. the method for identifying intent of unmanned aerial vehicle clusters in a challenge environment according to claim 7, wherein:
the optimization constraint function of the graph neural network comprises a connection regular term constraint and an entropy function regular term constraint; the connection regular term constraint is expressed as:
L_LP = ||A^(l) - S^(l) S^(l)T||_F
the entropy function regular term constraint is expressed as:
L_E = (1/n) Σ_{p=1}^{n} H(S_p^(l))
where H denotes the entropy function, n represents the number of nodes of the current pooling layer, and p=1,2…n.
9. The method for identifying intention of unmanned aerial vehicle clusters in countermeasure environment according to claim 1, wherein:
the unmanned aerial vehicle cluster comprises a multi-rotor decoy unmanned aerial vehicle, a reconnaissance-strike integrated unmanned aerial vehicle, a multi-rotor relay unmanned aerial vehicle and a small reconnaissance unmanned aerial vehicle.
10. The method for identifying intention of unmanned aerial vehicle clusters in countermeasure environment according to claim 1, wherein:
in the third step, classification of unmanned aerial vehicle cluster intent includes attack, impersonation, withdrawal, electronic interference and low-altitude attack.
CN202310635829.8A 2023-05-31 2023-05-31 Unmanned aerial vehicle cluster-oriented intention recognition method applied to countermeasure environment Pending CN116796284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310635829.8A CN116796284A (en) 2023-05-31 2023-05-31 Unmanned aerial vehicle cluster-oriented intention recognition method applied to countermeasure environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310635829.8A CN116796284A (en) 2023-05-31 2023-05-31 Unmanned aerial vehicle cluster-oriented intention recognition method applied to countermeasure environment

Publications (1)

Publication Number Publication Date
CN116796284A true CN116796284A (en) 2023-09-22

Family

ID=88039044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310635829.8A Pending CN116796284A (en) 2023-05-31 2023-05-31 Unmanned aerial vehicle cluster-oriented intention recognition method applied to countermeasure environment

Country Status (1)

Country Link
CN (1) CN116796284A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117668432A (en) * 2023-12-07 2024-03-08 北京航空航天大学 A method for calculating the threat index of clustered UAVs based on radar track information
CN118503790A (en) * 2024-07-09 2024-08-16 中国电子科技集团公司第十五研究所 Method, device, equipment and medium for identifying low-altitude unmanned aerial vehicle cluster countermeasure intention


Similar Documents

Publication Publication Date Title
US12072705B2 (en) Intelligent decision-making method and system for unmanned surface vehicle
Hu et al. Application of deep reinforcement learning in maneuver planning of beyond-visual-range air combat
Tai et al. A deep-network solution towards model-less obstacle avoidance
CN111240353B (en) Unmanned aerial vehicle collaborative air combat decision method based on genetic fuzzy tree
WO2021043193A1 (en) Neural network structure search method and image processing method and device
CN116796284A (en) Unmanned aerial vehicle cluster-oriented intention recognition method applied to countermeasure environment
CN112733251B (en) Collaborative flight path planning method for multiple unmanned aerial vehicles
CN112598046B (en) Target tactical intent recognition method in multi-machine cooperative air combat
Cheng et al. Research status of artificial neural network and its application assumption in aviation
CN102222240B (en) DSmT (Dezert-Smarandache Theory)-based image target multi-characteristic fusion recognition method
Siyuan et al. STABC-IR: An air target intention recognition method based on bidirectional gated recurrent unit and conditional random field with space-time attention mechanism
CN114266355A (en) Tactical intention identification method based on BilSTM-Attention
CN113625569B (en) Small unmanned aerial vehicle prevention and control decision method and system based on hybrid decision model
Qu et al. Intention recognition of aerial target based on deep learning
CN113741186B (en) Double-aircraft air combat decision-making method based on near-end strategy optimization
CN115130357A (en) GRU-based air target combat intention prediction system and method
CN114818853B (en) Intent recognition method based on bidirectional gated recurrent unit and conditional random field
CN114138022B (en) A Distributed Formation Control Method for UAV Swarms Based on Elite Pigeon Group Intelligence
CN118171742B (en) Knowledge-data driven air combat target intention reasoning method and system based on residual estimation
CN112926739A (en) Network countermeasure effectiveness evaluation method based on neural network model
Wang et al. Learning embedding features based on multisense-scaled attention architecture to improve the predictive performance of air combat intention recognition
CN113505538B (en) Unmanned aerial vehicle autonomous combat system based on computer generated force
CN115238832B (en) CNN-LSTM-based air formation target intention identification method and system
Pradhan et al. Artificial intelligence empowered models for UAV communications
CN116484227B (en) Neural network modeling method for generating tail end maneuver avoidance index of aircraft bullet countermeasure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination