Search Results (734)

Search Parameters:
Keywords = attackers’ knowledge

14 pages, 782 KiB  
Article
Mathematical Proposal for Securing Split Learning Using Homomorphic Encryption and Zero-Knowledge Proofs
by Agon Kokaj and Elissa Mollakuqe
Appl. Sci. 2025, 15(6), 2913; https://doi.org/10.3390/app15062913 - 7 Mar 2025
Abstract
This work presents a mathematical solution to data privacy and integrity issues in Split Learning that uses Homomorphic Encryption (HE) and Zero-Knowledge Proofs (ZKP). HE allows calculations to be conducted on encrypted data, keeping the data private, while ZKP ensures the correctness of these calculations without revealing the underlying data. Our proposed system, HavenSL, combines HE and ZKP to provide strong protection against attacks. It uses the Discrete Cosine Transform (DCT) to analyze model updates in the frequency domain and detect unusual changes in parameters. HavenSL also has a rollback feature that returns the system to a verified state if harmful changes are detected. Experiments on the CIFAR-10, MNIST, and Fashion-MNIST datasets show that using Homomorphic Encryption and Zero-Knowledge Proofs during training is feasible and that accuracy is maintained. This mathematically grounded approach shows how cryptographic techniques can protect decentralized learning systems and demonstrates the practical use of HE and ZKP in secure, privacy-aware collaborative AI. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
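For readers who want to experiment with the frequency-domain idea mentioned in the abstract, the following is a minimal sketch (not the authors' HavenSL implementation) of how a Discrete Cosine Transform over a flattened parameter update could flag unusual changes; the energy split and threshold are illustrative assumptions.

```python
# Minimal sketch (assumptions, not HavenSL): flag a model update whose parameter
# delta carries unusually high high-frequency DCT energy relative to low-frequency energy.
import numpy as np
from scipy.fft import dct

def update_is_suspicious(old_params, new_params, high_freq_fraction=0.5, threshold=3.0):
    delta = (new_params - old_params).ravel()
    coeffs = dct(delta, norm="ortho")                  # frequency-domain view of the update
    split = int(len(coeffs) * (1.0 - high_freq_fraction))
    low_energy = np.sum(coeffs[:split] ** 2) + 1e-12
    high_energy = np.sum(coeffs[split:] ** 2)
    return (high_energy / low_energy) > threshold

rng = np.random.default_rng(0)
w_old = rng.normal(size=10_000)
w_benign = w_old + 0.01 * rng.normal(size=10_000)                # broadband, small update
w_attacked = w_benign + 0.5 * np.cos(np.pi * np.arange(10_000))  # injected high-frequency pattern
print(update_is_suspicious(w_old, w_benign))    # typically False
print(update_is_suspicious(w_old, w_attacked))  # typically True
```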
Figures:
Figure 1: Data are encrypted on the client and processed by the server.
Figure 2: Client-Server Communication in Split Learning with HE and ZKP.
28 pages, 368 KiB  
Article
A CIA Triad-Based Taxonomy of Prompt Attacks on Large Language Models
by Nicholas Jones, Md Whaiduzzaman, Tony Jan, Amr Adel, Ammar Alazab and Afnan Alkreisat
Future Internet 2025, 17(3), 113; https://doi.org/10.3390/fi17030113 - 3 Mar 2025
Viewed by 416
Abstract
The rapid proliferation of Large Language Models (LLMs) across industries such as healthcare, finance, and legal services has revolutionized modern applications. However, their increasing adoption exposes critical vulnerabilities, particularly through adversarial prompt attacks that compromise LLM security. These prompt-based attacks exploit weaknesses in LLMs to manipulate outputs, leading to breaches of confidentiality, corruption of integrity, and disruption of availability. Despite their significance, existing research lacks a comprehensive framework to systematically understand and mitigate these threats. This paper addresses this gap by introducing a taxonomy of prompt attacks based on the Confidentiality, Integrity, and Availability (CIA) triad, a cornerstone of cybersecurity. This structured taxonomy lays the foundation for a unique framework of prompt security engineering, which is essential for identifying risks, understanding their mechanisms, and devising targeted security protocols. By bridging this critical knowledge gap, the present study provides actionable insights that can enhance the resilience of LLMs and ensure their secure deployment in high-stakes, real-world environments. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
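To make the idea of a CIA-triad-based taxonomy concrete, here is a small illustrative data structure; the attack families listed are common examples from the prompt-attack literature and are assumptions for illustration, not the paper's actual taxonomy.

```python
# Hypothetical illustration of organizing prompt attacks by the CIA triad.
# The specific attack names are illustrative examples, not the paper's categories.
PROMPT_ATTACK_TAXONOMY = {
    "confidentiality": ["system-prompt leaking", "training-data extraction"],
    "integrity": ["goal hijacking via prompt injection", "jailbreaking safety policies"],
    "availability": ["resource-exhaustion prompts", "context-window flooding"],
}

def classify(attack_name: str) -> str:
    """Return the CIA property an attack primarily threatens, or 'unknown'."""
    for cia_property, attacks in PROMPT_ATTACK_TAXONOMY.items():
        if attack_name in attacks:
            return cia_property
    return "unknown"

print(classify("system-prompt leaking"))  # -> confidentiality
```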
Figures:
Figure 1: Prompt injection attack against an LLM-integrated application.
28 pages, 11119 KiB  
Article
Tactical Coordination-Based Decision Making for Unmanned Combat Aerial Vehicles Maneuvering in Within-Visual-Range Air Combat
by Yidong Liu, Dali Ding, Mulai Tan, Yuequn Luo, Ning Li and Huan Zhou
Aerospace 2025, 12(3), 193; https://doi.org/10.3390/aerospace12030193 - 27 Feb 2025
Viewed by 176
Abstract
Targeting the autonomous decision-making problem of unmanned combat aerial vehicles (UCAVs) in a two-versus-one (2v1) within-visual-range (WVR) air combat scenario, this paper proposes a maneuver decision-making method based on tactical coordination. First, a coordinated situation assessment model is designed, which subdivides the air combat situation into optimization-driven and tactical coordinated situations. The former combines missile attack zone calculation and trajectory prediction to optimize the control quantity of a single aircraft, while the latter uses fuzzy logic to analyze the overall situation of the three aircraft to drive tactical selection. Second, a decision-making model based on a hierarchical expert system is constructed, establishing a hierarchical decision-making framework with a UCAV-coordinated combat knowledge base. The coordinated situation assessment results are used to match corresponding tactics and maneuver control quantities. Finally, an improved particle swarm optimization algorithm (I-PSO) is proposed, which enhances the optimization ability and real-time performance through the design of local social factor iterative components and adaptive adjustment of inertia weights. Air combat simulations in four different scenarios verify the effectiveness and superiority of the proposed decision-making method. The results show that the method can achieve autonomous decision making in dynamic air combat. Compared with decision-making methods based on optimization algorithms and differential games, the win rate is increased by about 17% and 18%, respectively, and the single-step decision-making time is less than 0.02 s, demonstrating high real-time performance and win rate. This research provides new ideas and methods for the autonomous decision making of UCAVs in complex air combat scenarios. Full article
(This article belongs to the Section Aeronautics)
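For readers unfamiliar with the optimizer family the abstract builds on, here is a minimal particle swarm optimization sketch with a linearly decreasing inertia weight; it is a generic stand-in, not the paper's I-PSO, and the adaptation schedule, social factors, and bounds are assumptions.

```python
# Generic PSO sketch with an adaptive (decreasing) inertia weight; not the paper's I-PSO.
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
                 c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters        # inertia weight shrinks over time
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso_minimize(lambda p: np.sum(p ** 2), dim=5)
print(best_x, best_val)  # should approach the origin and a value near zero
```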
Figures:
Figure 1: Horizontal air combat situation.
Figure 2: Block diagram of the UCAV air combat maneuver decision-making model.
Figure 3: Schematic diagram of the air combat situation between the enemy and the UCAV.
Figure 4: Variation of η_A with Q_u and Q_e.
Figure 5: (a) Variation of η_D with D_ue when Q_e ≥ 120°; (b) variation of η_D with D_ue when Q_e < 120°.
Figure 6: Variation of η_E with z_u and z_e.
Figure 7: Demonstration of horizontal defense splitting and combining tactics.
Figure 8: Convergence curves of the three algorithms on (a) test function 1, (b) test function 2, and (c) test function 3.
Figures 9-12: Simulation results for the head-on, trailing, trailed, and parallel situations: (a) trajectories of the three aircraft; (b) types of tactics used by the UCAVs; (c) change in situation value for the three aircraft; (d) targeting change of the enemy aircraft.
Figure 13: Win rate comparison with the method based on optimization algorithms (higher is better): (a) each discrete point is the average win rate of 10 MC trials; (b) distribution of win rates.
Figure 14: Win rate comparison with the method based on differential game algorithms (higher is better): (a) each discrete point is the average win rate of 10 MC trials; (b) distribution of win rates.
Figure 15: Computational performance comparison with the method based on differential game algorithms (lower is better): (a) each discrete point is the average CPU time of 10 MC trials; (b) distribution of CPU times.
19 pages, 1959 KiB  
Article
Leveraging Federated Learning for Malware Classification: A Heterogeneous Integration Approach
by Kongyang Chen, Wangjun Zhang, Zhangmao Liu and Bing Mi
Electronics 2025, 14(5), 915; https://doi.org/10.3390/electronics14050915 - 25 Feb 2025
Viewed by 286
Abstract
The increasing complexity and frequency of malware attacks pose significant challenges to cybersecurity, as traditional methods struggle to keep pace with the evolving threat landscape. Current malware classification techniques often fail to account for the heterogeneity of malware data and models across different clients, limiting their effectiveness. In this chapter, we propose a distributed model enhancement-based malware classification method that leverages federated learning to address these limitations. Our approach employs generative adversarial networks to generate synthetic malware data, transforming non-independent datasets into approximately independent ones to mitigate data heterogeneity. Additionally, we utilize knowledge distillation to facilitate the transfer of knowledge between client-specific models and a global classification model, promoting effective collaboration among diverse systems. Inspired by active defense theory, our method identifies suboptimal models during training and replaces them on a central server, ensuring all clients operate with optimal classification capabilities. We conducted extensive experimentation on the Malimg dataset and the Microsoft Malware Classification Challenge (MMCC) dataset. In scenarios characterized by both model heterogeneity and data heterogeneity, our proposed method demonstrated its effectiveness by improving the global malware classification model’s accuracy to 96.80%. Overall, our research presents a robust framework for improving malware classification while maintaining data privacy across distributed environments, highlighting its potential to strengthen cybersecurity defenses against increasingly sophisticated malware threats. Full article
(This article belongs to the Special Issue AI-Based Solutions for Cybersecurity)
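The knowledge-distillation step mentioned above can be illustrated with the standard soft-label objective; the temperature, weighting, and toy logits below are assumptions rather than the paper's settings.

```python
# Illustrative sketch (assumed, not the paper's code): distill a client model's knowledge
# into a global classifier with a soft-label KL term plus a hard-label cross-entropy term.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * CE(student, labels) + (1 - alpha) * T^2 * KL(student_soft || teacher_soft)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage with random logits for a 9-class problem (e.g., the nine MMCC malware classes).
student = torch.randn(8, 9, requires_grad=True)
teacher = torch.randn(8, 9)
labels = torch.randint(0, 9, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(loss.item())
```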
Figures:
Figure 1: System framework.
Figure 2: A GAN for malware data heterogeneity.
Figure 3: Model enhancement for malware classification.
Figure 4: MMCC dataset.
Figure 5: Original malware image (left) and generated malware image (right).
Figure 6: Accuracy of the global model on the MMCC dataset under client models with similar accuracy.
Figure 7: Accuracy of the global model on the MMCC dataset under client models with significant differences in accuracy.
Figure 8: Accuracy of the global model on the Malimg dataset under client models with similar accuracy.
Figure 9: Accuracy of the global model on the Malimg dataset under client models with significant differences in accuracy.
40 pages, 5125 KiB  
Article
Challenging Scientific Categorizations Through Dispute Learning
by Renaud Fabre, Patrice Bellot and Daniel Egret
Appl. Sci. 2025, 15(4), 2241; https://doi.org/10.3390/app15042241 - 19 Feb 2025
Viewed by 474
Abstract
Scientific dispute and scholarly debate have traditionally served as mechanisms for arbitrating between competing scientific categorizations. However, current AI technologies lack both the ethical framework and technical capabilities to handle the adversarial reasoning inherent in scientific discourse effectively. This creates a ‘categorization conundrum’ where new knowledge emerges from opaque black-box systems while simultaneously introducing unresolved vulnerabilities to errors and adversarial attacks. Our research addresses this challenge by examining how to preserve and enhance human dispute’s vital role in the creation, development, and resolution of knowledge categorization, supported by traceable AI assistance. Building on our previous work, which introduced GRAPHYP—a multiverse hypergraph representation of adversarial opinion profiles derived from multimodal web-based documentary traces—we present three key findings. First, we demonstrate that standardizing concepts and methods through ‘Dispute Learning’ not only expands the range of adversarial pathways in scientific categorization but also enables the identification of GRAPHYP model extensions. These extensions accommodate additional forms of human reasoning in adversarial contexts, guided by novel philosophical and methodological frameworks. Second, GRAPHYP’s support for human reasoning through graph-based visualization provides access to a broad spectrum of practical applications in decidable challenging categorizations, which we illustrate through selected case studies. Third, we introduce a hybrid analytical approach combining probabilistic and possibilistic methods, applicable to diverse classical research data types. We identify analytical by-products of GRAPHYP and examine their epistemological implications. Our discussion of standardized representations of documented adversarial uses highlights the enhanced value that structured dispute brings to elicit differential categorizations in the scientific discourse. Full article
(This article belongs to the Special Issue New Trends in Natural Language Processing)
Figures:
Figure 1: Motivation: human-led arbitration of competing categories with AI companionship.
Figure 2: Background: modeling adversarial categorization pathways.
Figure 3: Possibilistic evaluation.
Figure 4: Corresponding probabilistic measures.
Figure 5: Operating rules in GRAPHYP's fractal graph of category scaling.
Figure 6: Path genealogy in time and space of vital nodes.
Figure 7: Exploring conflicting conditions and detecting disputable knowledge.
Figure 8: Standardizing a dispute learning approach in GRAPHYP.
Figure A1: Primary steps of GRAPHYP's workflow; see Fabre et al. [15] for details.
Figure A2: Analysis of strategic documentary choices in GRAPHYP: alpha and beta parameter trends across session blocks; see definitions in the GRAPHYP notebook at https://github.com/pbellot/graphyp?tab=readme-ov-file (accessed on 5 December 2024).
26 pages, 3232 KiB  
Article
Controllable Blind AC FDIA via Physics-Informed Extrapolative AVAE
by Siliang Zhao, Wuman Luo, Qin Shu and Fangwei Xu
Sensors 2025, 25(3), 943; https://doi.org/10.3390/s25030943 - 5 Feb 2025
Viewed by 393
Abstract
False data injection attacks (FDIAs) targeting AC state estimation pose significant challenges, especially when only power measurements are available, and voltage measurements are absent. Current machine learning-based approaches struggle to effectively control state estimation errors and are confined to the data distribution of training sets. To address these limitations, we propose the physics-informed extrapolative adversarial variational autoencoder (PI-ExAVAE) for generating controllable and stealthy false data injections. By incorporating physically consistent priors derived from the AC power flow equations, which enforce both the physical laws of power systems and the stealth requirements to evade bad data detection mechanisms, the model learns to generate attack vectors that are physically plausible and stealthy while inducing significant and controllable deviations in state estimation. Experimental results on IEEE-14 and IEEE-118 systems show that the model achieves a 90% success rate in bypassing detection tests for most attack configurations and outperforms methods like SAGAN by generating smoother, more realistic deviations. Furthermore, the use of physical priors enables the model to extrapolate beyond the training data distribution, effectively targeting unseen operational scenarios. This highlights the importance of integrating physics knowledge into data-driven approaches to enhance adaptability and robustness against evolving detection mechanisms. Full article
(This article belongs to the Section Sensor Networks)
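For context on the detector that such attacks must evade, the following sketch shows the classical weighted-least-squares residual (chi-square) bad-data test from power system state estimation; it uses a linear DC-style measurement model for brevity, which is an assumption, whereas the paper works with AC state estimation.

```python
# Sketch of the chi-square bad-data test that stealthy FDIAs aim to bypass.
# Uses a linear model z = H x + e for brevity; the paper targets AC estimation.
import numpy as np
from scipy.stats import chi2

def passes_bad_data_test(z, H, sigma=0.01, alpha=0.05):
    W = np.eye(len(z)) / sigma**2
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # weighted least-squares estimate
    r = z - H @ x_hat                                    # measurement residuals
    J = float(r.T @ W @ r)                               # weighted residual norm
    return J <= chi2.ppf(1 - alpha, df=len(z) - H.shape[1])

rng = np.random.default_rng(1)
H = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
z = H @ x_true + 0.01 * rng.normal(size=20)
c = 0.3 * rng.normal(size=5)                     # intended shift in the estimated state
print(passes_bad_data_test(z, H))                # typically True (clean data)
print(passes_bad_data_test(z + H @ c, H))        # typically True (coordinated attack a = Hc)
print(passes_bad_data_test(z + 0.5 * rng.normal(size=20), H))  # typically False (random bad data)
```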
Figures:
Figure 1: Structure of the physics-informed extrapolative adversarial VAE.
Figure 2: Difference between attack methods based on GAN [20] and PI-ExAVAE; the dotted line indicates the vector generation path after training is completed.
Figure 3: Region index of NYISO.
Figure 4: Residuals of PI-ExAVAE under different val, compared with the raw measurements (y), LSTMAE-GAN, SAGAN, and AE-WGAN.
Figure 5: Detection rate of PI-ExAVAE under different val, compared with the raw measurements (y), LSTMAE-GAN, SAGAN, and AE-WGAN; a "measurement group" contains all measurements collected at a single time step.
Figure 6: Deviations of estimated and attacked measurements (with different val) compared with the original.
Figure 7: Voltage phasors of 13 buses (excluding the reference bus) before and after the attack with different methods and different val (left: magnitude; right: phase angle).
Figure 8: Comparison of voltage magnitude (left) and voltage angle (right) between the training data and the physics-guided model for the IEEE-14 buses.
Figure 9: Detection rates under different thresholds of the normalized maximum residual r for various val values, compared with the original measurements, SAGAN, LSTMAE-GAN, and AE-WGAN.
Figure 10: Voltage phasors of 117 buses before and after the attack (left: magnitude; right: phase angle).
21 pages, 2831 KiB  
Article
Detecting Malicious .NET Executables Using Extracted Methods Names
by Hamdan Thabit, Rami Ahmad, Ahmad Abdullah, Abedallah Zaid Abualkishik and Ali A. Alwan
AI 2025, 6(2), 20; https://doi.org/10.3390/ai6020020 - 21 Jan 2025
Viewed by 792
Abstract
The .NET framework is widely used for software development, making it a target for a significant number of malware attacks by developing malicious executables. Previous studies on malware detection often relied on developing generic detection methods for Windows malware that were not tailored to the unique characteristics of .NET executables. As a result, there remains a significant knowledge gap regarding the development of effective detection methods tailored to .NET malware. This work introduces a novel framework for detecting malicious .NET executables using statically extracted method names. To address the lack of datasets focused exclusively on .NET malware, a new dataset consisting of both malicious and benign .NET executable features was created. Our approach involves decompiling .NET executables, parsing the resulting code, and extracting standard .NET method names. Subsequently, feature selection techniques were applied to filter out less relevant method names. The performance of six machine learning models—XGBoost, random forest, K-nearest neighbor (KNN), support vector machine (SVM), logistic regression, and naïve Bayes—was compared. The results indicate that XGBoost outperforms the other models, achieving an accuracy of 96.16% and an F1-score of 96.15%. The experimental results show that standard .NET method names are reliable features for detecting .NET malware. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
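As a rough illustration of the pipeline described above, the sketch below vectorizes per-executable method-name lists and trains a gradient-boosted classifier; the method names, labels, and model settings are made-up assumptions, not the authors' dataset or configuration.

```python
# Illustrative sketch (assumptions, not the authors' pipeline): treat the extracted
# .NET method names of each executable as a "document", vectorize them, and classify.
from sklearn.feature_extraction.text import CountVectorizer
from xgboost import XGBClassifier

samples = [
    "Main ReadAllBytes CreateEncryptor WriteAllBytes RegistryKey.SetValue",  # hypothetical malicious
    "Main Console.WriteLine File.ReadAllText StringBuilder.Append",          # hypothetical benign
    "InvokeMember DownloadData CreateProcess VirtualAlloc",                   # hypothetical malicious
    "InitializeComponent Button_Click MessageBox.Show",                       # hypothetical benign
]
labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign

vectorizer = CountVectorizer(token_pattern=r"[^\s]+")  # keep dotted method names intact
X = vectorizer.fit_transform(samples)

clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(["Main DownloadData CreateEncryptor"])))
```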
Figures:
Figure 1: .NET framework.
Figure 2: The proposed .NET malware detection framework.
Figure 3: Sample of extracted .NET features.
Figure 4: Venn diagram.
Figure 5: ROC curve comparison of all tested classifiers.
Figure 6: Confusion matrices of all models.
Figure 7: Global feature importance.
Figure 8: SHAP summary plot for the top 10 features.
23 pages, 514 KiB  
Case Report
Experiencing Traumatic Violence: An Interpretative Phenomenological Analysis of One Man’s Lived Experience of a Violent Attack Involving a Knife
by Zoe Partington, R. Stephen Walsh and Danielle Labhardt
Behav. Sci. 2025, 15(1), 89; https://doi.org/10.3390/bs15010089 - 20 Jan 2025
Viewed by 774
Abstract
A review of the violent knife crime literature suggests that the experiential perspective is one which has not been addressed in academic study. The research presented hereafter aims to address this literary gap and generate transferable knowledge relevant to the lived experience of violent knife crime. The experiential study of the single case within psychological research involves detailed examination of a particular event. Participant ‘J’ is the survivor of an extremely violent attack, involving the use of a knife, in his own home. J’s experience was analysed using Interpretative Phenomenological Analysis with reference to elements of the lifeworld: temporality, spatiality, intersubjectivity, and embodiment. Three themes were identified: 1. switching from past to present tense when relaying traumatic experience; 2. the presence of redemption sequences; and 3. making sense as a temporal process, which included two subthemes: ‘The long journey’ and ‘Seeking belongingness’. This case emphasises that the traumatic event is conceptualised as one part of a longer journey towards recovery, and that recovery itself is central to the experience of violent knife crime. Finally, the need to understand recovery as a temporal process highlights the need to provide victims with appropriate support in order to avoid negative outcomes. Full article
Figures:
Figure 1: Diagram depicting the identified themes and subthemes within J’s interview transcript.
19 pages, 421 KiB  
Article
Task-Oriented Adversarial Attacks for Aspect-Based Sentiment Analysis Models
by Monserrat Vázquez-Hernández, Ignacio Algredo-Badillo, Luis Villaseñor-Pineda, Mariana Lobato-Báez, Juan Carlos Lopez-Pimentel and Luis Alberto Morales-Rosales
Appl. Sci. 2025, 15(2), 855; https://doi.org/10.3390/app15020855 - 16 Jan 2025
Viewed by 721
Abstract
Adversarial attacks deliberately modify deep learning inputs, mislead models, and cause incorrect results. Previous adversarial attacks on sentiment analysis models have demonstrated success in misleading these models. However, most existing attacks in sentiment analysis have applied a generalized approach to input modifications, without considering the characteristics and objectives of the different analysis levels. Specifically, for aspect-based sentiment analysis, there is a lack of attack methods that modify inputs in accordance with the evaluated aspects. Consequently, unnecessary modifications are made, compromising the input semantics, making the changes more detectable, and avoiding the identification of new vulnerabilities. In previous work, we proposed a model to generate adversarial examples in particular for aspect-based sentiment analysis. In this paper, we assess the effectiveness of our adversarial example model in negatively impacting aspect-based model results while maintaining high levels of semantic inputs. To conduct this evaluation, we propose diverse adversarial attacks across different dataset domains, target architectures, and consider distinct levels of victim model knowledge, thus obtaining a comprehensive evaluation. The obtained results demonstrate that our approach outperforms existing attack methods in terms of accuracy reduction and semantic similarity, achieving a 65.30% reduction in model accuracy with a low perturbation ratio of 7.79%. These findings highlight the importance of considering task-specific characteristics when designing adversarial examples, as even simple modifications to elements that support task classification can successfully mislead models. Full article
(This article belongs to the Special Issue Natural Language Processing: Novel Methods and Applications)
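The sketch below illustrates, in a very simplified form, the idea of restricting perturbations to words near the evaluated aspect while checking that a similarity score stays high; the synonym table, window size, and Jaccard-based similarity are assumptions and are far cruder than the ABAA model described in the abstract.

```python
# Toy sketch of an aspect-focused perturbation loop (assumed, not the ABAA model):
# only candidate words near the evaluated aspect are substituted, and a substitution
# is kept only if a (toy) similarity score stays above a threshold.
SYNONYMS = {"great": ["decent", "fine"], "terrible": ["poor", "weak"]}  # hypothetical table

def jaccard_similarity(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def perturb_for_aspect(sentence: str, aspect: str, window: int = 3,
                       min_similarity: float = 0.7) -> str:
    words = sentence.split()
    try:
        idx = [w.lower().strip(".,") for w in words].index(aspect.lower())
    except ValueError:
        return sentence                      # aspect not present: leave input untouched
    for i in range(max(0, idx - window), min(len(words), idx + window + 1)):
        for cand in SYNONYMS.get(words[i].lower().strip(".,"), []):
            new_words = words.copy()
            new_words[i] = cand
            candidate = " ".join(new_words)
            if jaccard_similarity(sentence, candidate) >= min_similarity:
                return candidate             # accept the first semantics-preserving swap
    return sentence

print(perturb_for_aspect("The battery life is great but the screen is terrible.",
                         aspect="battery"))
# Only the opinion word near "battery" is changed; "terrible" (about the screen) is untouched.
```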
Figures:
Figure 1: Aspect-based adversarial example generation process; following the aspect-based adversarial example (ABAA) model, the term to be altered is chosen based on the evaluated aspect, and each potential modification is controlled to ensure imperceptibility and preservation of the input’s semantics.
Figure 2: White-box aspect-based adversarial attack overview.
Figure 3: Gray-box aspect-based adversarial attack overview.
Figure 4: Black-box aspect-based adversarial attack overview.
Figure 5: Comparison of the ABAA attack, baseline 1, and state-of-the-art methods on the Laptop dataset in terms of perturbation ratio, semantic similarity, and accuracy reduction.
25 pages, 1993 KiB  
Article
Hacking Exposed: Leveraging Google Dorks, Shodan, and Censys for Cyber Attacks and the Defense Against Them
by Abdullah Alabdulatif and Navod Neranjan Thilakarathne
Computers 2025, 14(1), 24; https://doi.org/10.3390/computers14010024 - 15 Jan 2025
Viewed by 1352
Abstract
In recent years, cyberattacks have increased in sophistication, using a variety of tools to exploit vulnerabilities across the global digital landscapes. Among the most commonly used tools at an attacker’s disposal are Google dorks, Shodan, and Censys, which offer unprecedented access to exposed systems, devices, and sensitive data on the World Wide Web. While these tools can be leveraged by professional hackers, they have also empowered “Script Kiddies”, who are low-skill, inexperienced attackers who use readily available exploits and scanning tools without deep technical knowledge. Consequently, cyberattacks targeting critical infrastructure are growing at a rapid rate, driven by the ease with which these solutions can be operated with minimal expertise. This paper explores the potential for cyberattacks enabled by these tools, presenting use cases where these platforms have been used for both offensive and defensive purposes. By examining notable incidents and analyzing potential threats, we outline proactive measures to protect against these emerging risks. In this study, we delve into how these tools have been used offensively by attackers and how they serve defensive functions within cybersecurity. Additionally, we also introduce an automated all-in-one tool designed to consolidate the functionalities of Google dorks, Shodan, and Censys, offering a streamlined solution for vulnerability detection and analysis. Lastly, we propose proactive defense strategies to mitigate exploitation risks associated with such tools, aiming to enhance the resilience of critical digital infrastructure against evolving cyber threats. Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
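For readers who want to reproduce the kind of reconnaissance query discussed above, here is a short example using the official Shodan Python library; the API key is a placeholder, and such queries should only be run against assets you are authorized to assess.

```python
# Example reconnaissance query with the Shodan Python library (requires a valid API key).
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

try:
    # Same intent as the paper's example: Apache servers in a given country.
    results = api.search('apache country:"US"')
    print(f"Total results: {results['total']}")
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("port"), match.get("org"))
except shodan.APIError as exc:
    print(f"Shodan API error: {exc}")
```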
Figures:
Figure 1: Distribution of detected worldwide cyberattacks by type in 2022.
Figure 2: Key steps involved in executing a cyberattack.
Figure 3: Example of a Google dork query.
Figure 4: Example of a Shodan query that returns a list of IP addresses for devices running Apache within a specified country (United States).
Figure 5: Example of a Shodan query with filter options.
Figure 6: Example of a Censys search that finds all devices running a software product containing the word "Windows".
Figure 7: Automated Cyber Threat Hunting Tool V1.0.
20 pages, 3171 KiB  
Article
Design and Construction of the Real Felipe Fortress of Callao: Analysis of the Military Treatise and Layout Using Photogrammetry and GIS
by Diego Javier Celis-Estrada, Pablo Rodriguez-Navarro and Teresa Gil-Piqueras
Heritage 2025, 8(1), 23; https://doi.org/10.3390/heritage8010023 - 10 Jan 2025
Viewed by 779
Abstract
Peru constituted the most important Viceroyalty of the Spanish Empire in South America, with the Port of Callao controlling the South Pacific trade routes. Although it was safe in its infancy, Callao suffered coastal attacks leading to its fortification. However, on 28 October 1746, an earthquake and tidal wave devastated the port, leading to its relocation and the construction of the Real Felipe Fortress of Callao, the South Pacific’s most significant fortification. The fortress was based on 18th century military conceptions adapted to the specific conditions of the coastal lands of the Peruvian Viceroyalty, such as the lack of stone, the use of adobe, and the frequent earthquakes. This research sought to identify the architectural theories influencing its design, the adaptations necessary for its coastal location, and the underlying mathematical and military concepts. Photogrammetry based techniques and a geographic information system (GIS) were used for georeferencing historical planimetry, along with the analysis of historical documents. This allowed us to reconstruct the original design and make evident how European ideas were adjusted to the particularities of the American territory, thus contributing to the improvement of knowledge about Spanish military architecture in America. Full article
(This article belongs to the Special Issue 3D Reconstruction of Cultural Heritage and 3D Assets Utilisation)
Figures:
Figure 1: Location map: (a) location with respect to Peru; (b) location with respect to the Constitutional Province of Callao; (c) location of the Real Felipe Fortress of Callao (18L 266,049.61 m E, 8,665,584.90 m S).
Figure 2: Graphic documentation selected from Table 1 of the Spanish Archives Portal (PARES): (a) "Representation of the fires for the defense of the port of Callao in one location or another of the fortress"; (b) "Location of the fortification with respect to the old Callao enclosure"; (c) "The prison of Callao"; (d)-(f) plans of the Callao fortification project.
Figure 3: The original layout of the regular pentagon and the new design of the irregular pentagon, according to the report by José Amich, selected from Table 2 of the Spanish Archives Portal (PARES).
Figure 4: Three-dimensional (3D) model of the fortress generated from photographic data using PIX4Dmapper (https://www.pix4d.com/product/pix4dmapper-photogrammetry-software/, accessed on 7 January 2025).
Figure 5: Layout of the Real Felipe Fortress of Callao according to the design report by José Amich: (a) original project; (b) readjusted project; (c) strokes; (d) parallel lines.
Figure 6: Layout of the fortress on the orthomosaic of the Real Felipe Fortress of Callao: (a) original project; (b) readjusted project; (c) strokes.
Figure 7: Section 1-1: (a) representation of the wall according to the original design projected by José Amich; (b) section corresponding to the wall in its current state.
26 pages, 4448 KiB  
Article
Leveraging Neural Trojan Side-Channels for Output Exfiltration
by Vincent Meyers, Michael Hefenbrock, Dennis Gnad and Mehdi Tahoori
Cryptography 2025, 9(1), 5; https://doi.org/10.3390/cryptography9010005 - 7 Jan 2025
Viewed by 658
Abstract
Neural networks have become pivotal in advancing applications across various domains, including healthcare, finance, surveillance, and autonomous systems. To achieve low latency and high efficiency, field-programmable gate arrays (FPGAs) are increasingly being employed as accelerators for neural network inference in cloud and edge devices. However, the rising costs and complexity of neural network training have led to the widespread use of outsourcing of training, pre-trained models, and machine learning services, raising significant concerns about security and trust. Specifically, malicious actors may embed neural Trojans within NNs, exploiting them to leak sensitive data through side-channel analysis. This paper builds upon our prior work, where we demonstrated the feasibility of embedding Trojan side-channels in neural network weights, enabling the extraction of classification results via remote power side-channel attacks. In this expanded study, we introduced a broader range of experiments to evaluate the robustness and effectiveness of this attack vector. We detail a novel training methodology that enhanced the correlation between power consumption and network output, achieving up to a 33% improvement in reconstruction accuracy over benign models. Our approach eliminates the need for additional hardware, making it stealthier and more resistant to conventional hardware Trojan detection methods. We provide comprehensive analyses of attack scenarios in both controlled and variable environmental conditions, demonstrating the scalability and adaptability of our technique across diverse neural network architectures, such as MLPs and CNNs. Additionally, we explore countermeasures and discuss their implications for the design of secure neural network accelerators. To the best of our knowledge, this work is the first to present a passive output recovery attack on neural network accelerators, without explicit trigger mechanisms. The findings emphasize the urgent need to integrate hardware-aware security protocols in the development and deployment of neural network accelerators. Full article
(This article belongs to the Special Issue Emerging Topics in Hardware Security)
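To give a feel for how classification outputs can be read back from power traces, the following is a simplified template-matching sketch on synthetic traces; the leakage model, trace lengths, and noise levels are assumptions and do not reproduce the authors' attack.

```python
# Simplified sketch (synthetic data, assumed leakage model): recover an accelerator's
# output class by correlating an observed power trace with per-class template traces.
import numpy as np

def build_templates(traces, labels):
    """Average profiling traces per class into templates of shape (n_classes, T)."""
    return np.stack([traces[labels == c].mean(axis=0) for c in np.unique(labels)])

def recover_output(trace, templates):
    """Predict the class whose template correlates best with the observed trace."""
    return int(np.argmax([np.corrcoef(trace, t)[0, 1] for t in templates]))

rng = np.random.default_rng(0)
n_classes, T = 10, 512
t_axis = np.arange(T) / T

def class_leakage(c):
    # Assumed class-dependent leakage shape; real leakage depends on the accelerator.
    return 0.5 * np.sin(2 * np.pi * (c + 1) * t_axis)

profiling = np.concatenate(
    [class_leakage(c) + rng.normal(0, 0.3, size=(200, T)) for c in range(n_classes)])
profiling_labels = np.repeat(np.arange(n_classes), 200)
templates = build_templates(profiling, profiling_labels)

victim_trace = class_leakage(7) + rng.normal(0, 0.3, size=T)  # leakage consistent with class 7
print(recover_output(victim_trace, templates))                 # very likely prints 7
```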
Figures:
Figure 1: Abstract neuron (left) and hardware implementation of a neuron as in [30] (right).
Figure 2: Routing delay sensor as in [31].
Figure 3: Overview of the attack flow: the Trojan is injected at the training stage, after which the accelerator is compiled with FINN and deployed on an FPGA platform.
Figure 4: Exemplary floor plan of a CNN mapped onto the FPGA, with the eight layers, the output layer, the RDSs, and the control logic highlighted in different colors.
Figure 5: Overview of the setup for a single device (left) and the complete setup with two devices (right).
Figure 6: Correlation coefficient between the power estimate from MAC operations and the classification results over 100 training epochs.
Figure 7: Excerpt of normalized power traces from the inference of (a) a benign and (b) a Trojan MLP-64, averaged by accelerator output; colored lines denote class labels, with the standard deviation of each class highlighted.
Figure 8: Excerpt of normalized power traces from the inference of (a) a benign and (b) a Trojan VGG (BloodMNIST), averaged by accelerator output, with (c,d) zoomed views of the first 1024 samples for class 5.
Figure 9: Effect of different amounts of averaging on output-recovery accuracy, with measurements taken at room temperature.
Figure 10: Excerpt of normalized power traces from the inference of (a) a benign and (b) a Trojan MLP-256 with 100x averaging, grouped by accelerator output.
Figure 11: Per-time-step inter-class average absolute distance of the measured power traces for the MLP-256, at (a) 10x and (b) 100x averaging.
Figure 12: Per-time-step intra-class average absolute distance of the measured power traces for the MLP-256, at (a) 10x and (b) 100x averaging.
Figure 13: Effect of 10-100x averaging on output-recovery accuracy, with measurements taken at different temperatures.
Figure 14: Accuracy scores for output recovery on four different networks running on device#1 with benign and Trojan variants; the guessing accuracy (1/#classes) is shown in red.
20 pages, 4570 KiB  
Article
Transferable Targeted Adversarial Attack on Synthetic Aperture Radar (SAR) Image Recognition
by Sheng Zheng, Dongshen Han, Chang Lu, Chaowen Hou, Yanwen Han, Xinhong Hao and Chaoning Zhang
Remote Sens. 2025, 17(1), 146; https://doi.org/10.3390/rs17010146 - 3 Jan 2025
Viewed by 524
Abstract
Deep learning models have been widely applied to synthetic aperture radar (SAR) target recognition, offering end-to-end feature extraction that significantly enhances recognition performance. However, recent studies show that optical image recognition models are widely vulnerable to adversarial examples, which fool the models by adding imperceptible perturbation to the input. Although the targeted adversarial attack (TAA) has been realized in the white box setup with full access to the SAR model’s knowledge, it is less practical in real-world scenarios where white box access to the target model is not allowed. To the best of our knowledge, our work is the first to explore transferable TAA on SAR models. Since contrastive learning (CL) is commonly applied to enhance a model’s generalization, we utilize it to improve the generalization of adversarial examples generated on a source model to unseen target models in the black box scenario. Thus, we propose the contrastive learning-based targeted adversarial attack, termed CL-TAA. Extensive experiments demonstrated that our proposed CL-TAA can significantly improve the transferability of adversarial examples to fool the SAR models in the black box scenario. Full article
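The sketch below conveys the general flavor of a contrastive, targeted perturbation: features of the adversarial example are pulled toward a target-class anchor and pushed away from the source-class anchor under an L-infinity budget. It is an assumption about the general idea, not the authors' CL-TAA, and the toy model, anchors, and hyperparameters are placeholders.

```python
# Illustrative contrastive-style targeted attack sketch (not the authors' CL-TAA).
import torch
import torch.nn.functional as F

def targeted_contrastive_attack(model, x, anchor_target, anchor_source,
                                epsilon=8 / 255, steps=10, alpha=2 / 255, tau=0.1):
    """model(x) is assumed to return an (N, D) feature embedding."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        feats = F.normalize(model(x + delta), dim=1)
        pos = (feats @ F.normalize(anchor_target, dim=0)) / tau   # similarity to target class
        neg = (feats @ F.normalize(anchor_source, dim=0)) / tau   # similarity to true class
        loss = -torch.log(torch.exp(pos) / (torch.exp(pos) + torch.exp(neg))).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend the loss: increase target similarity
            delta.clamp_(-epsilon, epsilon)      # keep the perturbation within the budget
            delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a random "feature extractor" standing in for a SAR recognition model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1 * 32 * 32, 64))
x = torch.rand(4, 1, 32, 32)
anchor_t, anchor_s = torch.randn(64), torch.randn(64)
x_adv = targeted_contrastive_attack(model, x, anchor_t, anchor_s)
print((x_adv - x).abs().max().item())  # bounded by epsilon
```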
Figures:
Figure 1: Illustration of traditional contrastive learning (CL) and the proposed contrastive learning-based targeted adversarial attack (CL-TAA); traditional CL improves a model's generalization to hard samples and is typically used during pre-training, whereas CL-TAA enhances the generalization of adversarial examples across black box models and is applied during training.
Figure 2: SAR images for the ten classes in the MSTAR dataset and their corresponding optical images.
Figure 3: Targeted transfer success rates (%) on different models.
Figure 4: Visualization of the heatmaps for different models.
Figure 5: Visualization of the logits used for CL-TAA.
Figure 6: Adversarial examples generated by CL-TAA.
Figure 7: Effect of the number of iterations under the black box setting, using AMS-CNN as the source model.
19 pages, 3804 KiB  
Article
SAR-PATT: A Physical Adversarial Attack for SAR Image Automatic Target Recognition
by Binyan Luo, Hang Cao, Jiahao Cui, Xun Lv, Jinqiang He, Haifeng Li and Chengli Peng
Remote Sens. 2025, 17(1), 21; https://doi.org/10.3390/rs17010021 - 25 Dec 2024
Viewed by 665
Abstract
Deep neural network-based synthetic aperture radar (SAR) automatic target recognition (ATR) systems are susceptible to attack by adversarial examples, which leads to misclassification by the SAR ATR system, resulting in theoretical model robustness problems and security problems in practice. Inspired by optical images, current SAR ATR adversarial example generation is performed in the image domain. However, the imaging principle of SAR images is based on the imaging of the echo signals interacting between the SAR and objects. Generating adversarial examples only in the image domain cannot change the physical world to achieve adversarial attacks. To solve these problems, this article proposes a framework for generating SAR adversarial examples in a 3D physical scene. First, adversarial attacks are implemented in the 2D image space, and the perturbation in the image space is converted into simulated rays that constitute SAR images through backpropagation optimization methods. The mapping between the simulated rays constituting SAR images and the 3D model is established through coordinate transformation, and point correspondence to triangular faces and intensity values to texture parameters are established. Thus, the simulated rays constituting SAR images are mapped to the 3D model, and the perturbation in the 2D image space is converted back to the 3D physical space to obtain the position and intensity of the perturbation in the 3D physical space, thereby achieving physical adversarial attacks. The experimental results show that our attack method can effectively perform SAR adversarial attacks in the physical world. In the digital world, we achieved an average fooling rate of up to 99.02% for three objects in six classification networks. In the physical world, we achieved an average fooling rate of up to 97.87% for these objects, with a certain degree of transferability across the six different network architectures. To the best of our knowledge, this is the first work to implement physical attacks in a full physical simulation condition. Our research establishes a theoretical foundation for the future concealment of SAR targets in practical settings and offers valuable insights for enhancing the attack and defense capabilities of subsequent DNNs in SAR ATR systems. Full article
(This article belongs to the Section AI Remote Sensing)
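As a reading aid for the pipeline in the abstract, the following hedged sketch shows the two stages in miniature: an image-space perturbation restricted to simulated scattering points is optimized by backpropagation, and the per-point changes are then written into the texture parameters of the triangular faces they fall on. The classifier, scattering-point mask, and point-to-face correspondence are placeholders rather than the authors' released code.

```python
# Hedged sketch of the two stages described in the abstract: (1) optimize an
# intensity perturbation over the scattering points of a simulated SAR image,
# (2) write the resulting per-point changes into the texture parameters of the
# triangular faces they belong to. The model, mask, and point-to-face table are
# placeholders, not part of SAR-PATT's released code.
import torch
import torch.nn.functional as F

def optimize_point_perturbation(model, image, label, point_mask,
                                steps=200, lr=0.01, eps=0.3):
    """Untargeted attack: only pixels covered by simulated scattering points
    (point_mask == 1) are perturbed; the classification loss is maximized."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta * point_mask).clamp(0.0, 1.0)
        loss = -F.cross_entropy(model(adv), label)  # gradient ascent on the loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                 # keep the perturbation bounded
    return (delta * point_mask).detach()

def perturbation_to_face_textures(delta, point_pixels, point_to_face):
    """Map each perturbed scattering point back to its triangular face.
    A large face hit by several points keeps only the first point's value,
    mirroring the single-reference-point rule in the paper's Figure 2 caption."""
    face_textures = {}
    for (row, col), face_id in zip(point_pixels, point_to_face):
        face_textures.setdefault(face_id, float(delta[0, 0, row, col]))
    return face_textures
```

Restricting updates to the scattering-point mask is, in this sketch, what keeps the perturbation expressible as texture changes on the 3D model rather than as arbitrary pixel noise.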
Show Figures

Figure 1: In the SAR-PATT framework, we first generate adversarial perturbations in the SAR geometry data via an optimization method. We design a loss function to update the perturbations through backpropagation, thereby obtaining the final perturbations. Finally, these perturbations are mapped to the texture information of the 3D model through the mapping module.
Figure 2: The triangular faces that compose the 3D model. When mapping the scattering points to the triangular faces, due to the different areas of the triangular faces, a larger triangular face can correspond to multiple scattering points. Nonetheless, we can select only one scattering point as a reference to calculate the texture parameters of the triangular face.
Figure 3: The rendered images of the 3D models we used.
Figure 4: Comparison of the generated SAR simulation images and real-scene SAR images in the MSTAR dataset, with the real SAR images at the top and the SAR simulation images at the bottom.
Figure 5: The difference between our attack and other attacks.
Figure 6: Generated adversarial examples with their original labels, misclassified labels, and confidence levels. These adversarial examples are generated on ResNet-50.
Figure 7: The positions of the physical adversarial perturbations generated on three types of 3D models. From top to bottom, they are generated by ResNet-50, VGG-16, DenseNet-121, MobileNetV2, SqueezeNet, and ShuffleNetV2. The triangular faces with adversarial texture parameters are marked in red, and the number of triangular faces is labeled below the image. It can be observed that the perturbed triangular faces have different sizes.
22 pages, 28158 KiB  
Article
Edge-Aware Dual-Task Image Watermarking Against Social Network Noise
by Hao Jiang, Jiahao Wang, Yuhan Yao, Xingchen Li, Feifei Kou, Xinkun Tang and Limei Qi
Appl. Sci. 2025, 15(1), 57; https://doi.org/10.3390/app15010057 - 25 Dec 2024
Viewed by 637
Abstract
In the era of widespread digital image sharing on social media platforms, deep-learning-based watermarking has shown great potential in copyright protection. To address the fundamental trade-off between the visual quality of the watermarked image and the robustness of watermark extraction, we explore the role of structural features and propose a novel edge-aware watermarking framework. Our primary innovation lies in the edge-aware secret hiding module (EASHM), which achieves adaptive watermark embedding by aligning watermarks with image structural features. To realize this, the EASHM leverages knowledge distillation from an edge detection teacher and employs a dual-task encoder that simultaneously performs edge detection and watermark embedding through maximal parameter sharing. The framework is further equipped with a social network noise simulator (SNNS) and a secret recovery module (SRM) to enhance robustness against common image noise attacks. Extensive experiments on three public datasets demonstrate that our framework achieves superior watermark imperceptibility, with PSNR and SSIM values exceeding 40.82 dB and 0.9867, respectively, while maintaining over 99% decoding accuracy under various noise attacks, outperforming existing methods by significant margins. Full article
(This article belongs to the Topic Intelligent Image Processing Technology)
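The EASHM described in the abstract pairs edge detection with watermark embedding in a single encoder. Below is a minimal, hedged sketch of that dual-task arrangement and of a combined training objective (edge distillation from a teacher, image fidelity, and message recovery); the channel counts, layer layout, and loss weights are illustrative assumptions, not the authors' published configuration.

```python
# Hedged sketch of the dual-task idea in the EASHM: one shared trunk feeds both
# an edge-detection head (distilled from a teacher) and a watermark-embedding
# head. Channel counts, layer layout, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTaskEncoder(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Shared trunk: maximal parameter sharing between the two tasks.
        # Input is the cover image (3 ch) concatenated with the secret image (3 ch).
        self.shared = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.edge_head = nn.Conv2d(channels, 1, 1)   # predicted edge map
        self.hide_head = nn.Conv2d(channels, 3, 1)   # watermarked image

    def forward(self, cover, secret_image):
        feats = self.shared(torch.cat([cover, secret_image], dim=1))
        return torch.sigmoid(self.edge_head(feats)), torch.sigmoid(self.hide_head(feats))

def eashm_loss(edge_pred, teacher_edges, watermarked, cover,
               decoded_logits, secret_bits, w_edge=1.0, w_img=1.0, w_msg=1.0):
    """Combined objective: distill the teacher's edges, keep the watermarked
    image close to the cover, and recover the secret bits after the noise layer."""
    distill = F.binary_cross_entropy(edge_pred, teacher_edges)
    fidelity = F.mse_loss(watermarked, cover)
    recovery = F.binary_cross_entropy_with_logits(decoded_logits, secret_bits)
    return w_edge * distill + w_img * fidelity + w_msg * recovery
```

In this sketch, sharing the trunk is how edge supervision can influence where watermark energy is placed, which is one way to read the abstract's "aligning watermarks with image structural features".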
Show Figures

Figure 1: Framework overview of the proposed digital watermarking framework, including three key components: the edge-aware secret hiding module (EASHM), the social network noise simulator (SNNS), and the secret recovery module (SRM).
Figure 2: Secret diffusion process. The binary secret message M_sec is transformed through linear mapping, followed by reshape and upscaling operations, to generate the information image I_sec, which matches the dimensions of the cover image I_cov (a minimal sketch of this step follows the figure list).
Figure 3: Visual examples illustrating the edge-aware secret hiding module's performance, displaying the cover image (I_cov), watermarked image (I_wm), edge detection teacher output (E_teach), edge detection output (E_pred), and watermark embedding output (I_hide) from top to bottom.
Figure 4: A typical propagation scenario of a watermarked image through social networks and editing tools.
Figure 5: Visualization of noise attacks simulated by the SNNS. The left panel shows the cover image I_cov and the watermarked image I_wm. The right panel shows the effects of different noise attacks from the single-noise pool SNNS_pool^single, where the upper row displays the noise-attacked watermarked images NI_wm and the lower row shows the corresponding residual maps |NI_wm - I_wm| for cases (a-h). While the framework implements cropout and dropout, which require cover image information, we demonstrate their effects using simpler crop and drop operations for clearer interpretation of the noise patterns.
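">
The Figure 2 caption above describes the secret diffusion step; a minimal sketch under assumed sizes (a 64-bit message, a 16x16 intermediate map, a 256x256 cover, and nearest-neighbour upscaling) is given below. These values and the choice of upscaling operator are assumptions for illustration only.

```python
# Hedged sketch of the secret diffusion step from Figure 2: a binary message is
# linearly mapped, reshaped into a small square map, and upscaled to the cover
# image's spatial size. Message length, grid size, and cover size are examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SecretDiffusion(nn.Module):
    def __init__(self, msg_len=64, grid=16, cover_size=256):
        super().__init__()
        self.grid, self.cover_size = grid, cover_size
        self.linear = nn.Linear(msg_len, grid * grid)   # linear mapping

    def forward(self, secret_bits):                     # (N, msg_len) in {0, 1}
        x = self.linear(secret_bits)                    # linear mapping
        x = x.view(-1, 1, self.grid, self.grid)         # reshape to a small map
        return F.interpolate(x, size=(self.cover_size, self.cover_size),
                             mode="nearest")            # upscale to cover size

# Example: a 64-bit message diffused into a 1-channel 256x256 information image.
msg = torch.randint(0, 2, (1, 64)).float()
info_image = SecretDiffusion()(msg)   # shape: (1, 1, 256, 256)
```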