Search Results (107,044)

Search Parameters:
Keywords = computational

14 pages, 5299 KiB  
Article
An Approach for Detecting Tomato Under a Complicated Environment
by Chen-Feng Long, Yu-Juan Yang, Hong-Mei Liu, Feng Su and Yang-Jun Deng
Agronomy 2025, 15(3), 667; https://doi.org/10.3390/agronomy15030667 (registering DOI) - 7 Mar 2025
Abstract
Tomato is one of the most popular and widely cultivated fruits and vegetables in the world. In large-scale cultivation, manual picking is inefficient and labor-intensive, which is likely to lead to a decline in the quality of the fruits. Although mechanical picking can improve efficiency, it is affected by factors such as leaf occlusion and changes in light conditions in the tomato growth environment, resulting in poor detection and recognition results. To address these challenges, this study proposes a tomato detection method based on Graph-CenterNet. The method employs Vision Graph Convolution (ViG) to replace traditional convolutions, thereby enhancing the flexibility of feature extraction, while reducing one downsampling layer to strengthen global information capture. Furthermore, the Coordinate Attention (CA) module is introduced to optimize the processing of key information through correlation computation and weight allocation mechanisms. Experiments conducted on the Tomato Detection dataset demonstrate that the proposed method achieves average precision improvements of 7.94%, 10.58%, and 1.24% compared to Faster R-CNN, CenterNet, and YOLOv8, respectively. The results indicate that the improved Graph-CenterNet method significantly enhances the accuracy and robustness of tomato detection in complex environments.
(This article belongs to the Section Precision and Digital Agriculture)
Figures: (1) Tomato images in a complex environment: (a) leaf occlusion, (b) backlighting, (c) fruit overlap. (2) Data enhancement: original vs. enhanced image. (3) Structure of the Graph-CenterNet model. (4) Data augmentation rendering. (5) Results of different layers: three vs. two layers of multiscale fusion. (6) Loss value curve on the Tomato Detection dataset. (7) Detection results on the Tomato Detection dataset for the original image, CenterNet, Faster R-CNN, YOLOv8, and Graph-CenterNet. (8) Detection results on the cherry tomato1 Computer Vision Project for Graph-CenterNet and YOLOv8.
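To make the weight-allocation idea behind the Coordinate Attention (CA) module more concrete, here is a minimal PyTorch sketch of a generic coordinate-attention block in the style of Hou et al. (2021). It is an illustration only, not the authors' Graph-CenterNet code, and the channel count and reduction ratio are placeholder values.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Generic coordinate-attention block: directional pooling plus per-axis gating."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Pool along each spatial direction separately to keep positional information.
        x_h = x.mean(dim=3, keepdim=True)                          # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)      # (n, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w                                       # reweighted features

feats = torch.randn(2, 64, 128, 128)
out = CoordinateAttention(64)(feats)   # same shape as the input
```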
28 pages, 8967 KiB  
Article
Adaptive Global Dense Nested Reasoning Network into Small Target Detection in Large-Scale Hyperspectral Remote Sensing Image
by Siyu Zhan, Yuxuan Yang, Muge Zhong, Guoming Lu and Xinyu Zhou
Remote Sens. 2025, 17(6), 948; https://doi.org/10.3390/rs17060948 (registering DOI) - 7 Mar 2025
Abstract
Small and dim target detection is a critical challenge in hyperspectral remote sensing, particularly in complex, large-scale scenes where spectral variability across diverse land cover types complicates the detection process. In this paper, we propose a novel target reasoning algorithm named Adaptive Global Dense Nested Reasoning Network (AGDNR). This algorithm integrates spatial, spectral, and domain knowledge to enhance the detection accuracy of small and dim targets in large-scale environments and simultaneously enables reasoning about target categories. The proposed method involves three key innovations. Firstly, we develop a high-dimensional, multi-layer nested U-Net that facilitates cross-layer feature transfer, preserving high-level features of small and dim targets throughout the network. Secondly, we present a novel approach for computing physicochemical parameters, which enhances the spectral characteristics of targets while minimizing environmental interference. Thirdly, we construct a geographic knowledge graph that incorporates both target and environmental information, enabling global target reasoning and more effective detection of small targets across large-scale scenes. Experimental results on three challenging datasets show that our method outperforms state-of-the-art approaches in detection accuracy and achieves successful classification of different small targets. Consequently, the proposed method offers a robust solution for the precise detection of hyperspectral small targets in large-scale scenarios.
Figures: (1) Overview of the hyperspectral target detection framework: feature extraction, surface feature extraction, and pixel-level knowledge reasoning modules; convolutional-kernel weights serve as target semantics, are enriched with NDVI-derived surface semantics in a global semantic pool, and are propagated over an a priori knowledge graph. (2) Structure of U-Net and the dense nested convolutional network, whose cross-layer connections preserve small-target features in deep convolution. (3) Channel and spatial attention mechanisms after multi-layer convolution. (4) Global reasoning module combining soft links for target categories and hard links for surface types. (5) San Diego dataset: RGB image, NDVI-based surface classification, and ground truth. (6) Detection maps of ACE, CEM, HTD-IRN, MLSN, TSTTD, CS-TTD, and AGDNR on the San Diego, Avon, synthetic, and HAD100 datasets. (7) ROC curves and (8) separability maps of the compared methods on the four datasets. (9) Classification and detection maps for the four datasets. (10) Classification ROC curves for the four datasets. (11, 12) Network architectures of AGDNR and its ablation variants (without SC, without SC & DS). (13) Output features of nodes L(4,0), L(3,1), L(2,2), L(1,3), and L(0,4) for AGDNR and its two variants on the San Diego dataset. (14) AUC(τ, D) values per target category on the four datasets. (15) AUC(τ, D) values of AGDNR and four variants across all categories. (16) Detection maps of AGDNR with and without surface reasoning. (17) ROC curves and (18) AUC values of AGDNR and AGDNR w/o LSF per target category on the San Diego dataset. (19) Enhanced features generated by AGDNR and AGDNR w/o LSF on the San Diego dataset.
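The framework's surface semantics come from an NDVI-based classification (see the figure summary above). As a rough sketch of that step, with made-up band indices and class thresholds rather than the paper's actual pipeline, one could write:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def classify_surface(cube: np.ndarray, red_band: int, nir_band: int) -> np.ndarray:
    """Coarse surface classes from a (bands, H, W) hyperspectral cube."""
    v = ndvi(cube[nir_band].astype(float), cube[red_band].astype(float))
    classes = np.zeros(v.shape, dtype=np.uint8)   # 0: water / shadow
    classes[v > 0.1] = 1                          # 1: bare soil / built-up (assumed cutoff)
    classes[v > 0.4] = 2                          # 2: vegetation (assumed cutoff)
    return classes

cube = np.random.rand(30, 64, 64)                 # toy cube: 30 bands, 64 x 64 pixels
labels = classify_surface(cube, red_band=10, nir_band=20)
```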
23 pages, 1939 KiB  
Article
Enhancing Mobile App Development for Sustainability: Designing and Evaluating the SBAM Design Cards
by Chiara Tancredi, Roberta Presta, Laura Mancuso and Roberto Montanari
Sustainability 2025, 17(6), 2352; https://doi.org/10.3390/su17062352 (registering DOI) - 7 Mar 2025
Abstract
Behavioral changes are critical for addressing sustainability challenges, which have become increasingly urgent due to the growing impact of global greenhouse gas emissions on ecosystems and human livelihoods. However, translating awareness into meaningful action requires practical tools to bridge this gap. Mobile applications, utilizing strategies from human–computer interaction (HCI) such as gamification, nudging, and persuasive technologies, have proven to be powerful in promoting sustainable behaviors. To support designers in developing effective apps of this kind, theory-based design guidelines were created, drawing on established theories and design approaches aimed at shaping and encouraging virtuous user behaviors fostering sustainability. To make these guidelines more accessible and enhance their usability during the design phase, this study presents their transformation into the SBAM card deck, a deck of 11 design cards. The SBAM cards aim to simplify theoretical concepts, stimulate creativity, and provide structured support for design discussions, helping designers generate solutions tailored to specific project contexts. This study also evaluates the effectiveness of the SBAM cards in the design process through two workshops with design students. Results show that the cards enhance ideation, foster creativity, and improve designers' perceived self-efficacy compared to working from the same design-guideline information presented in a traditional textual format. This paper discusses the SBAM cards' design and evaluation methodology, findings, and implications, offering insights into how the SBAM design cards can bridge the gap between theory and practice in sustainability-focused mobile app development. To ensure broader accessibility, the SBAM cards have been made available to the public through a dedicated website.
(This article belongs to the Special Issue Environmental Behavior and Climate Change)
Figures: (1) SBAM cards, developed from the SBAM guidelines proposed by Tancredi et al. [17], supporting the design of mobile apps that foster sustainable behaviors. (2) Participants using the SBAM cards during the workshop. (3) Self-efficacy scores (entry vs. exit) for the control and experimental groups in both workshops; the experimental group shows significant increases. (4) CSI scores, (5) SUS scores, and (6) perceived-usefulness scores for the experimental vs. control groups across both workshops; the experimental group scored higher in each case, with significant differences for CSI and SUS but not for perceived usefulness. (7) Theoretical-grounding and creativity scores rated by the design-quality evaluators, both significantly higher for the experimental group. (A1–A5) The five slides of the PowerPoint template.
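The workshop comparisons summarized above (entry vs. exit self-efficacy, experimental vs. control CSI and SUS scores) are standard paired and independent-samples tests. The sketch below uses invented scores purely to illustrate the type of comparison; it does not reproduce the study's data or its exact statistical procedure.

```python
from scipy import stats

# Hypothetical self-efficacy scores for one group, before and after the workshop.
entry = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]
exit_ = [3.9, 4.2, 3.8, 4.4, 4.0, 3.7]
t_within, p_within = stats.ttest_rel(entry, exit_)        # paired: same participants

# Hypothetical CSI scores for control vs. experimental participants.
control_csi = [55, 60, 58, 62, 57]
experimental_csi = [70, 74, 69, 76, 72]
t_between, p_between = stats.ttest_ind(experimental_csi, control_csi)

print(f"within-group p = {p_within:.3f}, between-group p = {p_between:.3f}")
```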
19 pages, 9739 KiB  
Article
Rockfall Hazard Evaluation in a Cultural Heritage Site: Case Study of Agia Paraskevi Monastery, Monodendri, Greece
by Spyros Papaioannou, George Papathanassiou and Vassilis Marinos
Geosciences 2025, 15(3), 92; https://doi.org/10.3390/geosciences15030092 (registering DOI) - 7 Mar 2025
Abstract
Rockfall is considered the main geohazard in mountainous areas with steep morphology. The main objective of this study is to assess the rockfall hazard in the cultural heritage site of the Monastery of Agia Paraskevi, Monodendri, in northern Greece, where a recent rockfall event occurred, destroying a small house and the protective fence constructed to protect the Monastery of Agia Paraskevi. To evaluate the rockfall potential, engineering geological-oriented activities were carried out, such as geostructurally oriented field measurements, aiming to simulate the rockfall path and to compute the kinetic energy and the runout distance. In addition, using remote sensing tools such as Unmanned Aerial Vehicles (UAVs), we were able to inspect the entire slope face and detect the locations of detached blocks by measuring their volume. As a result, it was concluded that the average volume of the expected detached blocks is around 1.2 m³, while the maximum kinetic energy along a rockfall trajectory ranges from 1850 to 2830 kJ, depending on the starting point (source). Furthermore, we discussed the level of similarity between the outcomes arising from the data obtained by the traditional field survey and the UAV campaigns regarding the structural analysis of discontinuity sets.
Figures: (1) Location of the study area in northwestern Greece with the two earthquake epicenters (Mw 5.4 and Mw 4.6), the Agia Paraskevi Monastery below the steep limestone slope (photo taken 21 October 2021), and the morphology of the Vikos Gorge. (2) Rockfalls at distances of 20 and 60 m from the monastery, including the protection netting destroyed in the October 2021 event. (3) Flowchart of the methodology (SfM: structure from motion; UAV: Unmanned Aerial Vehicle; DSE: Discontinuity Set Extractor; DS: Discontinuity Set). (4) Rock slope showing the sites of traditional (site D) and UAV-based (sites A, B, and C) structural analysis, the monastery, and the entrance to the Vikos Gorge. (5) Stereographic projection (lower hemisphere, equal area) of discontinuity poles and main sets from compass-derived data (site D). (6–8) DSE-based stereonets of the normal vectors' pole densities for sites A, B, and C, with a three-dimensional view of the discontinuity sets at site B (bedding in blue, J1 in red, J2 in green). (9) Combined stereographic projection from the geological field survey (site D) and UAV-derived data (sites A, B, and C); bedding in green, J1 in red, J2 in orange. (10) Kinematic analysis for planar sliding, wedge failure, toppling, and toppling for vertical joints of similar strike with a slope face that dips into the slope. (11) Source areas of detached rock blocks, shown in red. (12) Traces of the trajectories tr1, tr2, and tr3 used to determine the rockfall parameters. (13) Two-dimensional rockfall trajectories between the monastery and the entrance of the Vikos Gorge (clean hard bedrock in yellow, bedrock with little soil or vegetation in green, trajectories in red), with the suggested barrier location as a black line segment.
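As a back-of-envelope check on the reported figures, block mass and impact velocity follow from E = 1/2 m v^2 with m = ρV. The sketch below assumes a limestone density of roughly 2600 kg/m³, a value not stated in the abstract:

```python
import math

density = 2600.0          # kg/m^3, typical limestone (assumed, not from the paper)
volume = 1.2              # m^3, average detached block reported in the abstract
mass = density * volume   # ~3120 kg

for energy_kj in (1850.0, 2830.0):   # kinetic-energy range reported along the trajectories
    v = math.sqrt(2.0 * energy_kj * 1e3 / mass)
    print(f"{energy_kj:.0f} kJ -> implied velocity ~ {v:.1f} m/s")
```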
12 pages, 7869 KiB  
Article
Design of an E-Band Multiplexer Based on Turnstile Junction
by Shaohang Li, Yuan Yao, Xiaohe Cheng and Junsheng Yu
Electronics 2025, 14(6), 1072; https://doi.org/10.3390/electronics14061072 (registering DOI) - 7 Mar 2025
Abstract
This paper presents an E-band four-channel multiplexer based on a turnstile junction. The proposed multiplexer consists of a power distribution unit featuring a turnstile junction topology and four Chebyshev bandpass filters. Thanks to the implementation of a rotating gate connection structure as the distribution unit, the overall compactness was enhanced, and the complexity of optimization was significantly reduced. Furthermore, this configuration offers a well-organized spatial port distribution, facilitating scalability for additional channels. According to the frequency band planning and design requirements of the communication system, an E-band four-channel multiplexer was designed and manufactured using high-precision computer numerical control (CNC) milling technology, achieving an error margin of ±5 μm. The experimental results indicate that the passbands are 70.6–73.07 GHz, 73.7–76.07 GHz, 82.55–82.9 GHz, and 83.4–85.9 GHz. The in-band insertion loss of each channel is below 1.7 dB, while the return loss at the common port exceeds 12 dB. The measured results align closely with simulations, demonstrating promising potential for practical applications.
(This article belongs to the Section Microwave and Wireless Communications)
Figures: (1) Physical structure and dimensions of the turnstile junction. (2) Electric field distributions and schematic diagram of the turnstile junction. (3) Schematic of energy transmission in the turnstile junction, with the input and output E-field intensities at each port. (4) Simulated S-parameters of the turnstile junction. (5) Physical structure and dimensions of the bandpass filter. (6) Effects of w12 on the coupling coefficient and of wi on Qex. (7) Simulation results of each bandpass filter. (8) Distributed model and physical structure and dimensions of the turnstile junction multiplexer. (9) Simulated S-parameters of the multiplexer. (10) Electric field distribution at different frequencies. (11) Fabrication model of the multiplexer. (12) Fabricated prototype and test scenario. (13) Simulated and measured results of the multiplexer.
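For readers unfamiliar with the response type, the sketch below builds an ideal analog Chebyshev band-pass prototype for the first reported passband using SciPy. The filter order and ripple are assumptions, and the paper's filters are waveguide structures rather than lumped prototypes, so this only illustrates the passband shape.

```python
import numpy as np
from scipy import signal

# Work in GHz so the angular frequencies stay well conditioned (effectively rad/ns).
f_lo, f_hi = 70.6, 73.07                  # first reported passband, GHz
b, a = signal.cheby1(N=5, rp=0.1, Wn=[2 * np.pi * f_lo, 2 * np.pi * f_hi],
                     btype="bandpass", analog=True)

f = np.linspace(68.0, 76.0, 2000)         # sweep, GHz
_, h = signal.freqs(b, a, worN=2 * np.pi * f)
attenuation_db = -20.0 * np.log10(np.abs(h) + 1e-12)
print(f"minimum in-band attenuation ~ {attenuation_db.min():.2f} dB")
```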
13 pages, 2163 KiB  
Article
ViBEx: A Visualization Tool for Gene Expression Analysis
by Michael H. Terrefortes-Rosado, Andrea V. Nieves-Rivera, Humberto Ortiz-Zuazaga and Marie Lluberes-Contreras
BioMedInformatics 2025, 5(1), 13; https://doi.org/10.3390/biomedinformatics5010013 (registering DOI) - 7 Mar 2025
Abstract
Background: Variations in the states of Gene Regulatory Networks significantly influence disease outcomes and drug development. Boolean Networks serve as a tool to conceptualize and understand the complex relationships between genes. Threshold computation methods are used for the binarization of gene expression and the Boolean representation of its Gene Regulatory Network. This study aims to provide a platform that facilitates the exploration of the impact of different threshold computation methods on the binarization of gene expression and the subsequent Boolean representation of Gene Regulatory Networks. Methods: Threshold computation methods are implemented for binarizing gene expression, enabling the Boolean representation of the Gene Regulatory Networks. Variations in gene expression discretization and threshold computation methods often lead to differing Boolean representations, which may affect the subsequent analysis. Lluberes proposed a framework for analyzing gene expression when binarization varies based on these factors. This theoretical framework was implemented using the Python Dash framework. Results: A visualization tool has been developed to implement this framework. The tool allows users to upload gene expression datasets and interact with a dashboard to explore gene expression binarization and the inferred Boolean Networks. Conclusions: The developed visualization tool provides a platform that facilitates the exploration of how different binarization methods impact the interpretation of Gene Regulatory Networks, offering insights for disease research and drug development.
(This article belongs to the Special Issue Editor's Choices Series for Methods in Biomedical Informatics Section)
Figures: (1) Model uncertainty and discretization uncertainty for gene DDR1 (Table A1, dataset [6]). (2) Regulatory Network, Boolean functions, and Boolean Network. (3) Application framework and structure. (4) Landing page. (5) Interactive data table with genes RFC2, PAX8, and GUCA1A selected. (6) Selecting methods and binarizing for those genes. (7) Binarization tab. (8) Thresholds and gene expression for gene GUCA1A. (9) Threshold displacement for gene GUCA1A. (10) Statistics tab for gene CCL5. (11) Boolean Network graph for genes RFC2, PAX8, and GUCA1A. (12) Network state tables for those genes. (13) Boolean Network from transition rules (Table A2). (14) Editing a value and the resulting network update. (All examples refer to Table A1 and the dataset in [6].)
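The core point, that the chosen threshold rule changes a gene's Boolean profile, can be sketched in a few lines. The two rules below (mean and midrange) are generic examples and are not necessarily the methods implemented in ViBEx:

```python
import numpy as np

def binarize(expr: np.ndarray, method: str = "mean") -> tuple[np.ndarray, float]:
    """Binarize one gene's expression profile with a chosen threshold rule."""
    if method == "mean":
        t = float(expr.mean())
    elif method == "midrange":
        t = float((expr.min() + expr.max()) / 2.0)
    else:
        raise ValueError(f"unknown method: {method}")
    return (expr > t).astype(int), t

expr = np.array([2.1, 2.4, 7.9, 8.3, 2.2, 8.1])   # toy expression values for one gene
for m in ("mean", "midrange"):
    bits, t = binarize(expr, m)
    print(m, round(t, 2), bits)   # different thresholds can yield different Boolean profiles
```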
19 pages, 748 KiB  
Article
Cyberbullying Perpetration and Socio-Behavioral Correlates in Italian and Spanish Preadolescents: A Cross-National Study and Serial Mediation Analysis
by Gianluca Mariano Colella, Rocco Carmine Servidio, Anna Lisa Palermiti, Maria Giuseppina Bartolo, Paula García-Carrera, Rosario Ortega-Ruiz and Eva M. Romera
Int. J. Environ. Res. Public Health 2025, 22(3), 389; https://doi.org/10.3390/ijerph22030389 (registering DOI) - 7 Mar 2025
Abstract
The spread of information and communication technologies (ICTs) has brought advantages and disadvantages, particularly impacting youth, who use the Internet and social media applications daily. In preadolescents' social development, problematic social media use (PSMU) and cyberbullying (CB) are potential risk factors across several countries. PSMU is defined as a lack of self-regulation in the use of social media platforms that is associated with negative outcomes in everyday life, while CB refers to using digital technology to harass, threaten, or embarrass another person. Among preadolescents, CB perpetration is frequently associated with cybervictimization (CV) experiences. The underlying mechanisms that drive this relationship have received limited attention. The aim of the cross-national comparative study, rooted in the general aggression model, is to investigate the direct and indirect effects between cyberbullying perpetration and cybervictimization, testing a model involving PSMU and moral disengagement (MD) as serial mediators in this association. A total of 895 Italian and Spanish preadolescents (Mage = 11.23, SDage = 1.064) completed a self-report survey during school hours. Descriptive statistics were computed, and a serial mediation model was run. The results show that CV is positively associated with CB, and that PSMU and MD positively serially mediate the CV–CB link. This study's insights suggest the need for tailored educational interventions targeting European youth, to promote more positive online social interactions and a safer digital environment.
Figures: (1) Theoretical model of serial mediation effects linking cybervictimization and cyberbullying through PSMU and MD. (2) Non-standardized estimates from the serial mediation model (** p < 0.001).
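A serial mediation of the form CV → PSMU → MD → CB is typically quantified as a product of regression coefficients with a bootstrap confidence interval (in the style of PROCESS model 6). The sketch below runs that computation on simulated data; it is not the study's analysis, model specification, or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 895
cv = rng.normal(size=n)
psmu = 0.4 * cv + rng.normal(size=n)
md = 0.3 * cv + 0.5 * psmu + rng.normal(size=n)
cb = 0.2 * cv + 0.1 * psmu + 0.4 * md + rng.normal(size=n)

def ols_coefs(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

def serial_indirect(cv, psmu, md, cb):
    a1 = ols_coefs(psmu, cv)[0]           # CV -> PSMU
    d21 = ols_coefs(md, cv, psmu)[1]      # PSMU -> MD, controlling for CV
    b2 = ols_coefs(cb, cv, psmu, md)[2]   # MD -> CB, controlling for CV and PSMU
    return a1 * d21 * b2

boot = []
for _ in range(2000):                     # percentile bootstrap of the serial indirect effect
    idx = rng.integers(0, n, n)
    boot.append(serial_indirect(cv[idx], psmu[idx], md[idx], cb[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```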
19 pages, 13823 KiB  
Article
Autonomous Agricultural Robot Using YOLOv8 and ByteTrack for Weed Detection and Destruction
by Ardin Bajraktari and Hayrettin Toylan
Machines 2025, 13(3), 219; https://doi.org/10.3390/machines13030219 (registering DOI) - 7 Mar 2025
Abstract
Automating agricultural machinery presents a significant opportunity to lower costs and enhance efficiency in both current and future field operations. The detection and destruction of weeds in agricultural areas via robots can be given as an example of this process. Deep learning algorithms can accurately detect weeds in agricultural fields. Additionally, robotic systems can effectively eliminate these weeds. However, the high computational demands of deep learning-based weed detection algorithms pose challenges for their use in real-time applications. This study proposes a vision-based autonomous agricultural robot that leverages the YOLOv8 model in combination with ByteTrack to achieve effective real-time weed detection. A dataset of 4126 images was used to create YOLO models, with 80% of the images designated for training, 10% for validation, and 10% for testing. Six different YOLO object detectors were trained and tested for weed detection. Among these models, YOLOv8 stands out, achieving a precision of 93.8%, a recall of 86.5%, and a mAP@0.5 detection accuracy of 92.1%. With an object detection speed of 18 FPS and the advantages of the ByteTrack integrated object tracking algorithm, YOLOv8 was selected as the most suitable model. Additionally, the YOLOv8-ByteTrack model, developed for weed detection, was deployed on an agricultural robot with autonomous driving capabilities integrated with ROS. This system facilitates real-time weed detection and destruction, enhancing the efficiency of weed management in agricultural practices.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
Figures: (1) Machine-vision-based weeding robots: the Bonirob, the ARA, the AVO, and the Laserweeder. (2) Overview and (3) block diagram of the autonomous agricultural robot. (4) Position of the autonomous agricultural robot. (5) Flowchart of the autonomous navigation part. (6) YOLOv5 architecture [49]. (7) YOLOv8 architecture [49]. (8) ByteTrack workflow [55]. (9) Types of weeds: dandelion, Heliotropium indicum, young field thistle Cirsium arvense, Cirsium arvense, Plantago lanceolata, Eclipta, and Urtica dioica. (10) Results of the YOLOv5 model on an image. (11) Results of YOLOv5 pruned and quantized, with and without transfer learning. (12, 13) Performance curves of YOLOv5: precision, recall, mAP@0.5, and mAP@0.5:0.95. (14) Performance results of YOLOv8.
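For orientation, training, validation, and ByteTrack-based tracking with the Ultralytics package typically look like the sketch below. The dataset YAML, epoch count, image size, and video path are placeholders, not the settings used in the paper.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                              # pretrained checkpoint as a starting point
model.train(data="weeds.yaml", epochs=100, imgsz=640)   # YAML defines the 80/10/10 split (hypothetical file)
metrics = model.val()                                   # reports precision, recall, mAP@0.5, mAP@0.5:0.95
results = model.track(source="field_video.mp4",         # hypothetical video path
                      tracker="bytetrack.yaml", show=False)
```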
22 pages, 516 KiB  
Systematic Review
Positron Emission Tomography–Magnetic Resonance Imaging, a New Hybrid Imaging Modality for Dentomaxillofacial Malignancies—A Systematic Review
by Anastasia Mitsea, Nikolaos Christoloukas, Spyridoula Koutsipetsidou, Periklis Papavasileiou, Georgia Oikonomou and Christos Angelopoulos
Diagnostics 2025, 15(6), 654; https://doi.org/10.3390/diagnostics15060654 (registering DOI) - 7 Mar 2025
Abstract
Background/Objectives: Emerging hybrid imaging modalities, like Positron Emission Tomography/Computed Tomography (PET/CT) and Positron Emission Tomography/Magnetic Resonance Imaging (PET/MRI), are useful for assessing head and neck cancer (HNC) and its prognosis during follow-up. PET/MRI systems enable simultaneous PET and MRI scans within a single session. These combined PET/MRI scanners merge MRI's better soft tissue contrast and the molecular metabolic information offered by PET. Aim: To review scientific articles on the use of hybrid PET/MRI techniques in diagnosing dentomaxillofacial malignancies. Method: The available literature on the use of PET/MRI for the diagnosis of dentomaxillofacial malignancies in four online databases (Scopus, PubMed, Web of Science, and the Cochrane Library) was searched. Eligible for this review were original full-text articles on PET/MRI imaging, published between January 2010 and November 2024, based on experimental or clinical research involving humans. Results: Out of the 783 articles retrieved, only twelve articles were included in this systematic review. Nearly half of the articles (5 out of 12) concluded that PET/MRI is superior to PET, MRI, and PET/CT imaging in relation to defining malignancies' size. Six articles found no statistically significant results and the diagnostic accuracy presented was similar in PET/MRI versus MRI and PET/CT images. Regarding the overall risk of bias, most articles had a moderate risk. Conclusions: The use of PET/MRI in HNC cases provides a more accurate diagnosis regarding dimensions of the tumor and thus a more accurate surgical approach if needed. Further prospective studies on a larger cohort of patients are required to obtain more accurate results on the application of hybrid PET/MRI.
(This article belongs to the Special Issue Advances in Dental Imaging, Oral Diagnosis, and Forensic Dentistry)
Figure: PRISMA flow diagram.
21 pages, 1178 KiB  
Article
User Behavior on Value Co-Creation in Human–Computer Interaction: A Meta-Analysis and Research Synthesis
by Xiaohong Chen and Yuan Zhou
Electronics 2025, 14(6), 1071; https://doi.org/10.3390/electronics14061071 (registering DOI) - 7 Mar 2025
Abstract
Value co-creation in online communities refers to a process in which all participants within a platform's ecosystem exchange and integrate resources while engaging in mutually beneficial interactive processes to generate perceived value-in-use. User behavior plays a crucial role in influencing value co-creation in human–computer interaction. However, existing research contains controversies, and there is a lack of comprehensive studies exploring which factors of user behavior influence it and the mechanisms through which they operate. This paper employs meta-analysis to examine the factors and mechanisms based on 42 studies from 2006 to 2023 with a sample size of 30,016. It examines the relationships at the individual, interaction, and environment layers and explores moderating effects through subgroup analysis. The results reveal a positive overall effect between user behavior and value co-creation performance. Factors including self-efficacy, social identity, enjoyment, and belonging (individual layer); information support, social interaction, trust, and reciprocity (interaction layer); as well as shared values, incentives, community culture, and subjective norms (environment layer) positively influence value co-creation. The moderating effect of situational and measurement factors indicates that Chinese communities and monocultural environments have more significant effects than international and multicultural ones, while community type is not significant. Structural equation models and subjective collaboration willingness have a stronger moderating effect than linear regression and objective behavior, which constitutes a counterintuitive finding. This study enhances theoretical research on user behavior and provides insights for managing value co-creation in human–computer interaction.
Figures: (1) Data collection and sample selection. (2) Overall effect on value co-creation performance. (3) Individual-layer, (4) interaction-layer, and (5) environment-layer effects on value co-creation performance.
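Meta-analytic pooling of correlation effect sizes is commonly done on Fisher-z transformed values with a DerSimonian–Laird random-effects estimator. The sketch below uses invented study correlations and sample sizes to show the computation; it does not reproduce the paper's data or software.

```python
import numpy as np

r = np.array([0.32, 0.45, 0.28, 0.51, 0.38])   # hypothetical per-study correlations
n = np.array([210, 150, 340, 120, 500])        # hypothetical per-study sample sizes

z = np.arctanh(r)                  # Fisher z transform
v = 1.0 / (n - 3)                  # within-study variance of z
w = 1.0 / v
z_fixed = np.sum(w * z) / np.sum(w)
q = np.sum(w * (z - z_fixed) ** 2)                 # heterogeneity statistic Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(r) - 1)) / c)            # DerSimonian-Laird between-study variance
w_star = 1.0 / (v + tau2)
z_random = np.sum(w_star * z) / np.sum(w_star)
print(f"pooled r (random effects) = {np.tanh(z_random):.3f}")
```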
18 pages, 5239 KiB  
Article
A Facile Two-Step High-Throughput Screening Strategy of Advanced MOFs for Separating Argon from Air
by Xiaoyi Xu, Bingru Xin, Zhongde Dai, Chong Liu, Li Zhou, Xu Ji and Yiyang Dai
Nanomaterials 2025, 15(6), 412; https://doi.org/10.3390/nano15060412 (registering DOI) - 7 Mar 2025
Abstract
Metal–organic frameworks (MOFs) based on the pressure swing adsorption (PSA) process show great promise in separating argon from air. As research burgeons, the number of MOFs has grown exponentially, rendering the experimental identification of materials with significant gas separation potential impractical. This study introduced a high-throughput screening through a two-step strategy based on structure–property relationships, which leveraged Grand Canonical Monte Carlo (GCMC) simulations, to swiftly and precisely identify high-performance MOF adsorbents capable of separating argon from air among a vast array of MOFs. Compared to traditional approaches for material development and screening, this method significantly reduced both experimental and computational resource requirements. This research pre-screened 12,020 experimental MOFs from a computationally ready experimental MOF (CoRE MOF) database down to 7328 and then selected 4083 promising candidates through structure–performance correlation. These MOFs underwent GCMC simulation assessments, showing superior adsorption performance to traditional molecular sieves. In addition, an in-depth discussion was conducted on the structural characteristics and metal atoms among the best-performing MOFs, as well as the effects of temperature, pressure, and real gas conditions on their adsorption properties. This work provides a new direction for synthesizing next-generation MOFs for efficient argon separation in labs, contributing to energy conservation and consumption reduction in the production of high-purity argon gas.
(This article belongs to the Section Inorganic Materials and Metal-Organic Frameworks)
Figures: (1) Workflow for high-throughput screening of MOF adsorbents for separating argon from air. (2) Structure–property relationships with oxygen as the target adsorbate (LCD, PLD, density, VSA, GSA, and VF); the nitrogen case is given in Figure S6. (3) t-SNE visualization of the sampling points and the pre-screening database. (4) Working capacity versus selectivity of MOF adsorbents within the optimal structural range, highlighting the top 10%, top 20%, and bottom 80% APSs for nitrogen and oxygen as target adsorbates. (5) APS versus R% of MOF adsorbents, separating materials with R% above and below 80%. (6) Proportion of open metal sites (OMSs) in the top MOFs and within MOFs of the same metal. (7) Metal ligand types and counts of all candidate MOFs. (8) Adsorption properties of the top MOFs compared with zeolites. (9) Effect of temperature and desorption pressure on the adsorbents' APSA and regenerability R% for nitrogen and oxygen as target adsorbates.
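Typical PSA screening metrics derived from GCMC uptakes are adsorption selectivity, working capacity, and regenerability (R%). The sketch below shows these textbook definitions with made-up uptake values; the paper's composite APS score may be defined differently.

```python
def selectivity(q_a: float, q_b: float, y_a: float, y_b: float) -> float:
    """Adsorption selectivity of A over B at gas-phase mole fractions y_a, y_b."""
    return (q_a / q_b) * (y_b / y_a)

def working_capacity(q_ads: float, q_des: float) -> float:
    """Uptake difference between adsorption and desorption pressure (mol/kg)."""
    return q_ads - q_des

def regenerability(q_ads: float, q_des: float) -> float:
    """R% = working capacity / adsorption uptake * 100."""
    return 100.0 * (q_ads - q_des) / q_ads

# Toy values for N2 (target adsorbate) over Ar under air-like composition.
q_n2_ads, q_n2_des, q_ar_ads = 1.8, 0.4, 0.6      # mol/kg, hypothetical GCMC uptakes
print(selectivity(q_n2_ads, q_ar_ads, y_a=0.78, y_b=0.01))
print(working_capacity(q_n2_ads, q_n2_des), regenerability(q_n2_ads, q_n2_des))
```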
18 pages, 949 KiB  
Article
Accelerating Pattern Recognition with a High-Precision Hardware Divider Using Binary Logarithms and Regional Error Corrections
by Dat Ngo, Suhun Ahn, Jeonghyeon Son and Bongsoon Kang
Electronics 2025, 14(6), 1066; https://doi.org/10.3390/electronics14061066 (registering DOI) - 7 Mar 2025
Abstract
Pattern recognition applications involve extensive arithmetic operations, including additions, multiplications, and divisions. When implemented on resource-constrained edge devices, these operations demand dedicated hardware, with division being the most complex. Conventional hardware dividers, however, incur substantial overhead in terms of resource consumption and latency. To address these limitations, we employ binary logarithms with regional error correction to approximate division operations. By leveraging approximation errors at boundary regions to formulate logarithm and antilogarithm offsets, our approach effectively reduces hardware complexity while minimizing the inherent errors of binary logarithm-based division. Additionally, we propose a six-stage pipelined hardware architecture, synthesized and validated on a Zynq UltraScale+ FPGA platform. The implementation results demonstrate that the proposed divider outperforms conventional division methods in terms of resource utilization and power savings. Furthermore, its application in image dehazing and object detection highlights its potential for real-time, high-performance computing systems.
(This article belongs to the Special Issue Biometrics and Pattern Recognition)
Figures: (1) Block diagram of binary logarithm-based division; the approximation blocks introduce errors into the quotient. (2) Errors introduced by Mitchell's algorithm: the error of the approximation log2(1 + x) ≈ x and the distribution of the resulting division errors. (3) Comparison of methods improving on Mitchell's algorithm: approximation lines (with the region 0.8 ≤ x ≤ 0.9 enlarged) and the corresponding errors. (4) Approximation lines for the offset definitions Δ_right, Δ_center, and Δ_avg, with the fraction divided into four regions. (5) Approximation-error analysis of the proposed method, including errors for varying values of M. (6) Hardware architecture of the proposed divider (REG: register; MSB/LSB: most/least significant bit); the divisor data path mirrors that of the dividend. (7) YOLOv9 object detection results on aerial images under varying haze levels using IFDH, with airplanes labeled in yellow and birds in blue.
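To see what the regional error corrections improve on, the sketch below implements plain Mitchell-style division, which replaces log2(1 + x) with x and 2^x with 1 + x. The paper's offset corrections and six-stage pipeline are not reproduced here.

```python
import math

def mitchell_log2(v: int) -> float:
    """Approximate log2 of a positive integer: characteristic k plus the raw fraction x."""
    k = v.bit_length() - 1            # characteristic
    x = v / (1 << k) - 1.0            # mantissa fraction in [0, 1)
    return k + x                      # uses log2(1 + x) ~ x

def mitchell_antilog2(y: float) -> float:
    k = math.floor(y)
    x = y - k                         # uses 2**x ~ 1 + x
    return (1.0 + x) * (1 << k) if k >= 0 else (1.0 + x) / (1 << -k)

def approx_divide(a: int, b: int) -> float:
    """Division as a subtraction in the log domain followed by an antilog."""
    return mitchell_antilog2(mitchell_log2(a) - mitchell_log2(b))

for a, b in [(200, 7), (1023, 33), (48, 5)]:
    print(a, b, round(approx_divide(a, b), 3), round(a / b, 3))
```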
43 pages, 1727 KiB  
Review
A Review of the Authentication Techniques for Internet of Things Devices in Smart Cities: Opportunities, Challenges, and Future Directions
by Ashwag Alotaibi, Huda Aldawghan and Ahmed Aljughaiman
Sensors 2025, 25(6), 1649; https://doi.org/10.3390/s25061649 (registering DOI) - 7 Mar 2025
Abstract
Smart cities have witnessed a transformation in urban living through the Internet of Things (IoT), which has improved connectedness, efficiency, and sustainability. However, the adoption of IoT devices presents significant security vulnerabilities, particularly in authentication. The specific limitations of IoT contexts, such as constrained computational resources, are frequently not adequately addressed by traditional authentication techniques. The existing methods of authentication used for IoT devices in smart cities are critically examined in this review study. We evaluate the advantages and disadvantages of each mechanism, emphasizing real-world applicability. Additionally, we examine cutting-edge developments that offer improved security and scalability, such as blockchain technology, biometric authentication, and machine learning-based solutions. This study aims to identify gaps and propose future research directions to develop robust authentication frameworks that protect user privacy and data integrity.
(This article belongs to the Special Issue Advanced IoT Systems in Smart Cities: 2nd Edition)
Figures: (1) IoT authentication attacks in smart cities. (2) Types of man-in-the-middle attacks in IoT authentication. (3) IoT device attack strategies. (4) Firmware exploits. (5) Techniques used in sensor manipulation attacks and (6) their mitigations. (7) Types of data attacks. (8) Authentication mechanisms in smart cities and (9) their types. (10–12) Examples of "something the user knows", "something the user has", and "something the user is". (13) Applications of biometric authentication. (14) Examples of "something the user does". (15) Applications of behavioral biometrics. (16) Tools and techniques for IoT authentication in smart cities. (17) Selection of papers for the literature review using PRISMA.
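As a concrete instance of the lightweight symmetric schemes such surveys cover, here is a generic HMAC challenge-response exchange. It illustrates the category only and is not a specific protocol evaluated in the review; the key handling and device identifier are placeholders.

```python
import hmac
import hashlib
import secrets

shared_key = secrets.token_bytes(32)        # provisioned to device and server beforehand (assumed)

def server_challenge() -> bytes:
    """Fresh random nonce; prevents replay of old responses."""
    return secrets.token_bytes(16)

def device_response(key: bytes, challenge: bytes, device_id: bytes) -> bytes:
    """Device proves key possession by keyed-hashing its identity and the challenge."""
    return hmac.new(key, device_id + challenge, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, device_id: bytes, response: bytes) -> bool:
    expected = hmac.new(key, device_id + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time comparison

nonce = server_challenge()
tag = device_response(shared_key, nonce, b"sensor-042")
print(server_verify(shared_key, nonce, b"sensor-042", tag))   # True
```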
16 pages, 2001 KiB  
Review
Cryptographic Techniques in Artificial Intelligence Security: A Bibliometric Review
by Hamed Taherdoost, Tuan-Vinh Le and Khadija Slimani
Cryptography 2025, 9(1), 17; https://doi.org/10.3390/cryptography9010017 (registering DOI) - 7 Mar 2025
Abstract
With the rise in applications of artificial intelligence (AI) across various sectors, security concerns have become paramount. Traditional AI systems often lack robust security measures, making them vulnerable to adversarial attacks, data breaches, and privacy violations. Cryptography has emerged as a crucial component in enhancing AI security by ensuring data confidentiality, authentication, and integrity. This paper presents a comprehensive bibliometric review to understand the intersection between cryptography, AI, and security. A total of 495 journal articles and reviews were identified using Scopus as the primary database. The results indicate a sharp increase in research interest between 2020 and January 2025, with a significant rise in publications in 2023 and 2024. The key application areas include computer science, engineering, and materials science. Key cryptographic techniques such as homomorphic encryption, secure multiparty computation, and quantum cryptography have gained prominence in AI security. Blockchain has also emerged as an essential technology for securing AI-driven applications, particularly in data integrity and secure transactions. This paper highlights the crucial role of cryptography in safeguarding AI systems and provides future research directions to strengthen AI security through advanced cryptographic solutions.
Figures: (1) Number of documents included over the last five years. (2) Field distribution of the included documents. (3) Countries of the documents' authors. (4) Keyword clusters (created with vosviewer.com, accessed 5 January 2025).
22 pages, 817 KiB  
Article
Clinical and Operational Applications of Artificial Intelligence and Machine Learning in Pharmacy: A Narrative Review of Real-World Applications
by Maree Donna Simpson and Haider Saddam Qasim
Pharmacy 2025, 13(2), 41; https://doi.org/10.3390/pharmacy13020041 (registering DOI) - 7 Mar 2025
Abstract
Over the past five years, the application of artificial intelligence (AI), including its significant subset, machine learning (ML), has significantly advanced pharmaceutical procedures in community pharmacies, hospital pharmacies, and pharmaceutical industry settings. Numerous notable healthcare institutions, such as Johns Hopkins University, Cleveland Clinic, and Mayo Clinic, have demonstrated measurable advancements in the use of artificial intelligence in healthcare delivery. Community pharmacies have seen a 40% increase in drug adherence and a 55% reduction in missed prescription refills since implementing AI technologies. According to reports, hospital implementations have reduced prescription distribution errors by up to 75% and enhanced the detection of adverse medication reactions by up to 65%. Numerous businesses, such as Atomwise and Insilico Medicine, assert that they have made noteworthy progress in the creation of AI-based medical therapies. Emerging technologies like federated learning and quantum computing have the potential to boost the prediction of protein–drug interactions by up to 300%, despite challenges including high implementation costs and regulatory compliance. The significance of upholding patient-centred care while encouraging technology innovation is emphasised in this review.
(This article belongs to the Special Issue The AI Revolution in Pharmacy Practice and Education)
Figure: Literature search approach.