
Search Results (832)

Search Parameters:
Keywords = ghosting

19 pages, 6474 KiB  
Article
Improved Lightweight YOLOv8 Model for Rice Disease Detection in Multi-Scale Scenarios
by Jinfeng Wang, Siyuan Ma, Zhentao Wang, Xinhua Ma, Chunhe Yang, Guoqing Chen and Yijia Wang
Agronomy 2025, 15(2), 445; https://doi.org/10.3390/agronomy15020445 - 11 Feb 2025
Abstract
In response to the challenges of detecting rice pests and diseases at different scales, and the difficulties of deploying and running models on embedded devices with limited computational resources, this study proposes a multi-scale rice pest and disease recognition model (RGC-YOLO). Based on the YOLOv8n network, which includes an SPPF layer, the model introduces a structural reparameterization module (RepGhost) to achieve implicit feature reuse through reparameterization. GhostConv layers replace some standard convolutions, reducing the model's computational cost and improving inference speed. A hybrid attention module (CBAM) is incorporated into the backbone network to enhance the model's ability to extract important features. The RGC-YOLO model is evaluated for accuracy and inference time on a multi-scale rice pest and disease dataset covering bacterial blight, rice blast, brown spot, and rice planthopper. Experimental results show that RGC-YOLO achieves a precision (P) of 86.2%, a recall (R) of 90.8%, and a mean average precision at Intersection over Union 0.5 (mAP50) of 93.2%. In terms of model size, the parameters are reduced by 33.2% and GFLOPs decrease by 29.27% compared to the base YOLOv8n model. Finally, the RGC-YOLO model is deployed on an embedded Jetson Nano device, where the inference time per image is reduced by 21.3% compared to the base YOLOv8n model, reaching 170 milliseconds. This study develops a multi-scale rice pest and disease recognition model that is successfully deployed on embedded field devices, achieving high-accuracy real-time monitoring and providing a valuable reference for intelligent equipment in unmanned farms.
(This article belongs to the Section Pest and Disease Management)
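Several models in these results rely on the Ghost idea summarized in the abstract above: produce half of the output channels with an ordinary convolution, then derive the other half from them with cheap linear operations. A minimal NumPy sketch of that channel arithmetic (the 1x1 primary convolution and the per-channel scaling are illustrative stand-ins, not the paper's exact layers):

```python
import numpy as np

def ghost_features(x, w_primary, w_cheap):
    """Ghost-module sketch: make half the output channels with an ordinary
    (here 1x1) convolution, then derive the other half from those 'primary'
    maps with a cheap per-channel linear operation, and concatenate.
    x: (C_in, H, W); w_primary: (C_out//2, C_in); w_cheap: (C_out//2,).
    Illustrative only -- real GhostConv uses k x k depthwise filters."""
    # primary maps: standard convolution (1x1 conv == matmul over channels)
    primary = np.tensordot(w_primary, x, axes=([1], [0]))  # (C_out//2, H, W)
    # ghost maps: cheap linear transform of each primary map
    ghost = w_cheap[:, None, None] * primary               # (C_out//2, H, W)
    return np.concatenate([primary, ghost], axis=0)        # (C_out, H, W)

x = np.random.rand(8, 4, 4)
w_p = np.random.rand(16, 8)
w_c = np.random.rand(16)
y = ghost_features(x, w_p, w_c)
print(y.shape)  # (32, 4, 4)
```

With 32 output channels here, only 16 come from the "expensive" convolution; the rest cost one multiply per pixel, which is where the parameter and GFLOP savings reported in the abstract come from.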
Show Figures

Figure 1. Original and data-augmented images of rice diseases and pests from the self-built dataset.
Figure 2. Dataset ground-truth bounding box information. (a) Ground-truth bounding box dimension information; (b) dataset label information.
Figure 3. Ground-truth bounding box size proportions by label. (a) Proportions for the four diseases and insect pests; (b) proportions for rice planthoppers.
Figure 4. The improved YOLOv8n network structure. Note: Conv is ordinary convolution; GhostConv is ghost convolution; C2f RepGhost is the improved reparameterized module; CBAM is the hybrid attention mechanism module; SPPF is the spatial pyramid pooling structure; Upsample is upsampling; concat is tensor concatenation; MaxPool2d is a max pooling operation; RepGhostModule is the reparameterized module.
Figure 5. Feature map generation schematics: (a) Conv; (b) GhostConv.
Figure 6. Internal structure of the RepGhost module and its improvements over the Ghost module. Note: cv (conv) is ordinary convolution; ReLU is an activation function; concat is tensor concatenation; dconv is depthwise separable convolution; add is an addition operation; SBlock is a shortcut block; DS is a downsampling layer; SE is a Squeeze-and-Excitation module; RG-bneck is a RepGhost bottleneck. Dashed blocks are inserted only when necessary. Cin and Cout are the input and output channels of the bottleneck, respectively.
Figure 7. CBAM attention module structure.
Figure 8. Prediction results of different models on the test set. Note: the red, pink, orange, and yellow rectangles are model prediction boxes; a yellow circle marks a missed detection; a blue rectangle marks a false detection.
Figure 9. Heatmaps of image feature extraction by different models. Note: red areas are where the model focuses most, indicating a strong contribution to detection; yellow areas receive less attention; blue areas have minimal impact on target detection and mark redundant information.
Figure 10. Real-time detection system and detection results. Note: the red rectangles mark the positions of the camera and the NVIDIA Jetson Nano in the overall schematic; the yellow rectangle shows the information output by the real-time monitoring system (camera number, image size, detected disease type, and real-time detection time for a single image).
12 pages, 4282 KiB  
Article
Simplifying the Diagnosis of Pediatric Nystagmus with Fundus Photography
by Noa Cohen-Sinai, Inbal Man Peles, Basel Obied, Noa Netzer, Noa Hadar, Alon Zahavi and Nitza Goldenberg-Cohen
Children 2025, 12(2), 211; https://doi.org/10.3390/children12020211 - 11 Feb 2025
Abstract
Background/Objectives: To simplify the diagnosis of congenital and acquired nystagmus using fundus photographs. Methods: This retrospective study included patients with congenital or childhood-acquired nystagmus examined at a hospital-based ophthalmology clinic (September 2020–September 2023) who had fundus photos taken. Exclusions were for incomplete data or low-quality images. Demographics, aetiology, orthoptic measurements, and ophthalmologic and neurological exams were reviewed. Two independent physicians graded fundus photos based on amplitude (distance between "ghost" images), the number of images visible, and the direction of nystagmus. Severity was rated on a 0–3 scale using qualitative and quantitative methods. Photographic findings were compared to clinical data, and statistical analysis used Mann–Whitney tests. Results: A total of 53 eyes from 29 patients (16 females, 13 males; mean age 12.5 years, range 3–65) were studied: 25 with binocular nystagmus and 3 with monocular nystagmus. Diagnoses included congenital (n = 15), latent-manifest (n = 3), neurologically associated (n = 2), and idiopathic (n = 9). Types observed were vertical (n = 5), horizontal (n = 23), rotatory (n = 10), and multidirectional (n = 15). Visual acuity ranged from 20/20 to no light perception. Fundus photos correlated with clinical diagnoses, aiding qualitative assessment of direction and amplitude and mitigating eye movement effects for clearer visualization of retinal detail. Conclusions: Fundus photography effectively captures nystagmus characteristics and retinal details, even in young children, despite continuous eye movements. Integrating fundus cameras into routine practice may enhance nystagmus diagnosis and management, improving patient outcomes.
(This article belongs to the Section Pediatric Ophthalmology)
Show Figures

Figure 1. Left-eye fundus photography of a patient with congenital horizontal nystagmus, taken before and after a modified Kestenbaum procedure performed to correct abnormal head position. (A) A ghost image, indicating small-amplitude horizontal nystagmus. (B) A single image, indicating resolution of the nystagmus, consistent with the clinical findings.
Figure 2. Typical fundus photos showing no double image (A), grade 1 rotatory nystagmus (B), grade 2 horizontal nystagmus with a vertical component (C), grade 3 horizontal nystagmus (D), grade 3 horizontal nystagmus with a vertical component (E), and grade 3 vertical nystagmus (F).
Figure 3. Fundus photos of Case 1 (A,B) showing double images with small-to-medium amplitude and both horizontal and vertical components in both eyes, consistent with clinical findings; the macula was normal, and blinded grading of the photographs yielded scores of 2 for the right eye and 3 for the left eye. Case 2 (C,D): panel (C) shows a normal fundus with a single image; panel (D) shows the left eye with a normal fundus but duplicated images, suggesting a small-amplitude horizontal nystagmus, corresponding to the clinical finding of latent nystagmus when the fellow eye is occluded.
23 pages, 5243 KiB  
Article
GS-YOLO: A Lightweight Identification Model for Precision Parts
by Haojie Zhu, Lei Dong, Hanpeng Ren, Hongchao Zhuang and Hu Li
Symmetry 2025, 17(2), 268; https://doi.org/10.3390/sym17020268 - 10 Feb 2025
Abstract
With the development of aerospace technology, the variety and complexity of spacecraft components have increased. Traditional manual and machine learning-based detection methods struggle to identify these parts accurately and quickly, while deep learning-based object detection networks require significant computational resources and impose high hardware requirements. This study introduces Ghost SCYLLA Intersection over Union You Only Look Once (GS-YOLO), an improved image recognition model derived from YOLOv5s, which integrates the global attention mechanism (GAM) with the Ghost module. The lightweight Ghost module substitutes for the original convolutional layers, producing half of the features via convolution and the other half by symmetric linear operations; this reduces the computational burden and model parameters by obtaining the redundant feature maps at low cost. A more lightweight SimSPPF structure replaces the original spatial pyramid pooling-fast (SPPF) module, increasing network speed. The GAM is included in the bottleneck architecture, improving feature extraction via channel-space interaction. Experimental results on the custom-made precision component dataset show that GS-YOLO achieves an accuracy of 96.5% with a model size of 10.8 MB. Compared to YOLOv5s, GS-YOLO improves accuracy by 1%, reduces parameters by 23%, and decreases computational requirements by 40.6%. Despite its light weight, the model's detection accuracy is improved.
(This article belongs to the Section Computer)
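The SimSPPF mentioned above is a faster variant of SPPF, which itself replaces the parallel 5/9/13 max-pooling pyramid of the original SPP with three sequential 5x5 poolings. A dependency-free, single-channel sketch of why the sequential form is equivalent (illustrative; SimSPPF's actual change is swapping SiLU for ReLU in the surrounding conv blocks):

```python
import numpy as np

def maxpool_same(x, k=5):
    """Stride-1 max pooling with 'same' padding on a 2-D map."""
    p = k // 2
    xp = np.pad(x, p, mode="constant", constant_values=-np.inf)
    h, w = x.shape
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def sppf(x, k=5):
    """SPPF-style pyramid: three *sequential* k x k poolings concatenated
    with the input. Each pooling reuses the previous result, so it is
    cheaper than the parallel 5/9/13 pooling of SPP while producing the
    same receptive fields (single-channel sketch)."""
    p1 = maxpool_same(x, k)
    p2 = maxpool_same(p1, k)   # equivalent receptive field: 9 x 9
    p3 = maxpool_same(p2, k)   # equivalent receptive field: 13 x 13
    return np.stack([x, p1, p2, p3])  # stack as channels

x = np.random.rand(8, 8)
y = sppf(x)
print(y.shape)  # (4, 8, 8)
```

Two chained 5x5 max pools see exactly the pixels a single 9x9 pool sees, and three see a 13x13 window, which is the equivalence the sequential design exploits.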
Show Figures

Figure 1. GS-YOLO network model structure.
Figure 2. CBS and GhostCBS structure diagram.
Figure 3. C3-1/2 structure diagram.
Figure 4. SimSPPF structure diagram.
Figure 5. Schematic diagram of the Ghost convolution module.
Figure 6. Ghost bottleneck structure module.
Figure 7. Global attention.
Figure 8. Channel attention module.
Figure 9. Spatial attention module.
Figure 10. Effect diagram of data augmentation.
Figure 11. Aerospace precision part diagram: (a) vertical tower; (b) connector B; (c) connector A; (d) connector; and (e) vibrator.
Figure 12. SIoU vs. CIoU.
Figure 13. Color difference detection effects: (a) YOLOv5 test results and (b) GS-YOLO test results.
Figure 14. Detection at different angles: (a) YOLOv5 test results and (b) GS-YOLO test results.
Figure 15. Multi-target detection: (a) YOLOv5 test results and (b) GS-YOLO test results.
Figure 16. Detection effect diagram of GS-YOLO.
19 pages, 297 KiB  
Article
Ghosts in the Machine: Kafka and AI
by Imke Meyer
Humanities 2025, 14(2), 25; https://doi.org/10.3390/h14020025 - 6 Feb 2025
Abstract
The writings of Franz Kafka open, perhaps precisely because of their temporal distance from our present, a unique window onto the nexus of power, material, and the human that constitutes AI today. Anxiety and Unbehagen [discontent] are states of mind that often grip both Kafka and his characters in an early-20th-century world increasingly dependent upon and perceived through the lens of disembodied communication and technology. But can we draw a line from Kafka's reflections on analog media to the digital media that have come to dominate our lives in the 21st century, and whose effects are felt on a planetary scale? The short answer is "yes". In Kafka's analog world of technological horrors, glitches in the machinic administration of human life turn out to be not bugs, but rather features of the system; precisely the arbitrary effects that accompany the rigid implementation of rules and the slippages that occur during their merciless application enhance the power of the system as a whole. Kafka's apparatuses and bureaucratic systems, in their powerful and toxic confluence of regularity and opacity, systematicity and arbitrariness, foreshadow the effects of AI upon our embodied existence in the 21st century.
(This article belongs to the Special Issue Franz Kafka in the Age of Artificial Intelligence)
15 pages, 3571 KiB  
Article
Lightweight UAV Landing Model Based on Visual Positioning
by Ning Zhang, Junnan Tan, Kaichun Yan and Sang Feng
Sensors 2025, 25(3), 884; https://doi.org/10.3390/s25030884 - 31 Jan 2025
Abstract
To enhance the precision of UAV (unmanned aerial vehicle) landings and enable convenient, rapid deployment of the model to mobile terminals, this study proposes Land-YOLO, a lightweight UAV-guided landing algorithm based on the YOLOv8n model. First, GhostConv replaces standard convolutions in the backbone network, leveraging existing feature maps to create additional "ghost" feature maps via low-cost linear transformations, thereby lightening the network structure. Additionally, the CSP structure of the neck network is enhanced by incorporating the PartialConv structure, which transmits some channel features through identity mapping, effectively reducing both the number of parameters and the computational load of the model. Finally, the bidirectional feature pyramid network (BiFPN) module is introduced; its bidirectional and weighted feature fusion mechanisms improve the precision and mean average precision with which the model recognizes the landing mark. Experimental results on landing-sign datasets collected in real and virtual environments show that Land-YOLO is 1.4% higher in precision and 0.91% higher in mAP0.5 than the original YOLOv8n baseline, meeting the detection requirements for landing signs. The model's memory usage and floating-point operations (FLOPs) are reduced by 42.8% and 32.4%, respectively, making it more suitable for deployment on the mobile terminal of a UAV.
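The BiFPN's weighted fusion mentioned in the abstract combines feature maps of different resolutions using learnable, normalized non-negative weights. A minimal sketch of that "fast normalized fusion" rule (the weight values here are placeholders; in the network they are trained parameters, and the maps would first be resized to a common shape):

```python
import numpy as np

def bifpn_fuse(features, weights, eps=1e-4):
    """BiFPN fast normalized fusion sketch: each input feature map gets a
    non-negative learnable weight, and the fused map is the weighted sum
    normalized by the total weight (plus eps for numerical stability),
    so the network learns how much each scale contributes."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU keeps w >= 0
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, features))

a = np.ones((4, 4))
b = np.full((4, 4), 3.0)
fused = bifpn_fuse([a, b], weights=[1.0, 1.0])
print(round(float(fused[0, 0]), 3))  # ~2.0: equal weights average the maps
```

Compared with softmax-based attention over scales, this normalization avoids the exponential and is the cheaper variant BiFPN is known for.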
Show Figures

Figure 1. YOLOv8n network structure diagram.
Figure 2. Land-YOLO network structure diagram.
Figure 3. GhostConv structure diagram.
Figure 4. Comparison of FPN and BiFPN structures.
Figure 5. CSPPC structure.
Figure 6. Comparison of conventional convolution and partial convolution.
Figure 7. Various landing markings.
Figure 8. Landing annotation dataset.
Figure 9. GFLOPs and parameters.
Figure 10. Land-YOLO indicators.
Figure 11. Comparison of detection results before and after improvement.
Figure 12. UAV simulation experiment.
18 pages, 3106 KiB  
Article
An FPGA-Based Hybrid Overlapping Acceleration Architecture for Small-Target Remote Sensing Detection
by Nan Fang, Liyuan Li, Xiaoxuan Zhou, Wencong Zhang and Fansheng Chen
Remote Sens. 2025, 17(3), 494; https://doi.org/10.3390/rs17030494 - 31 Jan 2025
Abstract
Small-object detection in satellite remote sensing images plays a pivotal role in the field of remote sensing. Achieving high-performance real-time detection demands not only efficient algorithms but also low-power, high-performance hardware platforms. However, most mainstream target detection methods currently rely on graphics processing units (GPUs) for acceleration, and the high power consumption of GPUs limits their use in resource-constrained platforms such as small satellites. Moreover, small-object detection faces multiple challenges: the targets occupy only a small number of pixels in the image, the background is often complex with significant noise interference, and existing detection models typically exhibit low accuracy on small targets. In addition, the large number of parameters in these models makes direct deployment on embedded devices difficult. To address these issues, we propose a hybrid overlapping acceleration architecture based on FPGA, along with a lightweight model derived from YOLOv5s that is specifically designed to enhance the detection of small objects in remote sensing images. This model incorporates a lightweight GhostBottleneckV2 module, significantly reducing both model parameters and computational complexity. Experimental results on the TIFAD thermal infrared small-object dataset show that our approach achieves a mean average precision (mAP) of 67.8% while consuming an average power of only 2.8 W. The robustness of the proposed model is verified on the HRSID dataset. Combining real-time performance with high energy efficiency, this architecture is particularly well suited to on-board remote sensing image processing systems, where reliable and efficient small-object detection is paramount.
Show Figures

Figure 1. The architecture of YOLOv5s [23].
Figure 2. Bottleneck structure diagram of GhostNetV2 [26]: (a) bottleneck with a stride of 1; (b) bottleneck with a stride of 2; (c) DFC attention. The Ghost module and DFC attention operate as two parallel branches, each extracting information from a different perspective.
Figure 3. GF-YOLO structure.
Figure 4. Data flow diagram.
Figure 5. GF-YOLO detection plot on the HRSID.
Figure 6. GF-YOLO detection plot on the TIFAD.
18 pages, 3690 KiB  
Article
A Lightweight Dynamically Enhanced Network for Wildfire Smoke Detection in Transmission Line Channels
by Yu Zhang, Yangyang Jiao, Yinke Dou, Liangliang Zhao, Qiang Liu and Guangyu Zuo
Processes 2025, 13(2), 349; https://doi.org/10.3390/pr13020349 - 27 Jan 2025
Abstract
Existing detection networks are not effective at detecting dynamic targets such as wildfire smoke, so a lightweight, dynamically enhanced wildfire smoke detection network for transmission line channels, LDENet, is proposed. First, a Dynamic Lightweight Conv Module (DLCM) is devised within the backbone network of YOLOv8 to enhance the perception of flames and smoke through dynamic convolution, and the Ghost Module is used to make the model lightweight; DLCM reduces the number of model parameters and improves the accuracy of wildfire smoke detection. Then, the DySample upsampling operator is used in the upsampling stage to make image generation more accurate with very few parameters. Finally, the loss function is improved during training: EMASlideLoss improves detection of small targets, and the Shape-IoU loss function optimizes for the shapes of wildfires and smoke. In experiments on wildfire and smoke datasets, the final mAP50 is 86.6%, which is 1.5% higher than YOLOv8, and the number of parameters is decreased by 29.7%. The experimental findings demonstrate that LDENet can effectively detect wildfire smoke and help ensure the safety of transmission line corridors.
(This article belongs to the Section Energy Systems)
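The dynamic convolution inside the DLCM described above replaces a single fixed kernel with an input-conditioned mixture of several candidate kernels. A small NumPy sketch of the kernel-mixing step (in the network, the attention logits would come from a learned squeeze-and-excitation-style branch; here they are passed in directly as an assumption):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_kernel(kernels, attn_logits):
    """Dynamic-convolution sketch: keep K candidate kernels and mix them
    per input with attention weights (softmax over K). The mixed kernel
    is then used in an ordinary convolution, so model capacity grows
    with almost no extra FLOPs at inference time."""
    pi = softmax(attn_logits)                 # (K,) attention over kernels
    return np.tensordot(pi, kernels, axes=1)  # weighted sum of kernels

K = 4
kernels = np.random.rand(K, 3, 3)             # K candidate 3x3 kernels
mixed = dynamic_kernel(kernels, attn_logits=np.zeros(K))
print(mixed.shape)  # (3, 3)
```

With zero logits, the softmax is uniform and the mixed kernel is simply the mean of the candidates; a trained attention branch would instead emphasize kernels suited to the current input, which is what makes the module "dynamic".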
Show Figures

Figure 1. Schematic diagram of the C2f module.
Figure 2. Schematic diagram of dynamic convolution.
Figure 3. Schematic diagram of the Ghost Module.
Figure 4. Schematic diagram of the DLCM.
Figure 5. Diagram of the LDENet.
Figure 6. Experimental dataset.
Figure 7. Experimental results for upsampling parameters.
Figure 8. Comparison of typical algorithms. (a) Original image; (b) YOLOv8; (c) YOLO11; (d) LDENet.
Figure 9. Comparison of typical algorithms. (a) Original image; (b) YOLOv8; (c) YOLO11; (d) LDENet.
Figure 10. Comparison of heatmaps. (a) Original image; (b) YOLOv8; (c) LDENet.
16 pages, 31353 KiB  
Article
Research on Textile Tiny Defective Targets Detection Method Based on YOLO-GCW
by Jun Chen, Yuan Xiao, Weiqian Li, Boshi Wang and Gangfeng Wang
Electronics 2025, 14(3), 480; https://doi.org/10.3390/electronics14030480 - 24 Jan 2025
Abstract
In a textile quality control system, defect detection occupies a central position. To address the numerous model parameters, time-consuming computation, and limited precision and accuracy on the tiny features of textile defects during detection, this paper proposes a textile defect detection method based on the YOLO-GCW network model. First, to improve detection accuracy on tiny defective targets, the CBAM (Convolutional Block Attention Module) attention mechanism was incorporated to guide the model to focus more on the spatial localization information of the defects. Meanwhile, the WIoU (Weighted Intersection over Union) loss function was adopted to enhance model training and improve detection accuracy; it provides a more accurate measure of the match between the predicted bounding box and the real target, improving the detection of tiny defect targets. Then, for performance optimization and lightweight deployment, the Ghost convolution structure replaced traditional convolution to compress the model parameter scale and increase detection speed on the complex texture features of textiles. Finally, extensive experiments confirmed the strong performance of the presented model and demonstrated its efficiency and effectiveness in various scenes.
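The CBAM mechanism adopted above applies channel attention followed by spatial attention. A dependency-free NumPy sketch of the two gates (the shared-MLP weights w1/w2 are hypothetical placeholders, and the 7x7 convolution real CBAM uses for the spatial gate is simplified here to an average of the pooled maps):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(x, w1, w2):
    """CBAM sketch: channel attention first (global avg- and max-pooled
    channel descriptors through a shared 2-layer MLP, summed, sigmoid),
    then spatial attention (channel-wise avg and max maps -> sigmoid gate).
    x: (C, H, W); w1: (C//r, C); w2: (C, C//r) with reduction ratio r."""
    # --- channel attention ---
    avg_d = x.mean(axis=(1, 2))                   # (C,) avg-pooled descriptor
    max_d = x.max(axis=(1, 2))                    # (C,) max-pooled descriptor
    mlp = lambda d: w2 @ np.maximum(w1 @ d, 0.0)  # shared MLP with ReLU
    ca = sigmoid(mlp(avg_d) + mlp(max_d))         # (C,) channel gate
    x = x * ca[:, None, None]
    # --- spatial attention (simplified: mean of the two pooled maps) ---
    sa = sigmoid((x.mean(axis=0) + x.max(axis=0)) / 2.0)  # (H, W) gate
    return x * sa[None, :, :]

x = np.random.rand(8, 4, 4)
w1 = np.random.rand(4, 8)   # reduction ratio r = 2
w2 = np.random.rand(8, 4)
y = cbam(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because both gates are sigmoids in (0, 1), the module only re-weights features rather than creating new ones, which is why it adds negligible parameters to the detectors in these abstracts.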
Show Figures

Figure 1. Structure of the YOLO-GCW model.
Figure 2. Convolutional Block Attention Module.
Figure 3. (a) CAM module structure; (b) SAM module structure.
Figure 4. (a) Standard convolution; (b) Ghost convolution.
Figure 5. Number of images in the textile defects dataset.
Figure 6. FPS–mAP0.5 scatter diagram.
Figure 7. Confusion matrix.
Figure 8. (a) Recall comparison curves; (b) precision comparison curve; (c) mAP0.5 comparison curve; (d) loss comparison curve.
20 pages, 5686 KiB  
Article
A VCG-Based Multiepitope Chlamydia Vaccine Incorporating the Cholera Toxin A1 Subunit (MECA) Confers Protective Immunity Against Transcervical Challenge
by Fnu Medhavi, Tayhlor Tanner, Shakyra Richardson, Stephanie Lundy, Yusuf Omosun and Francis O. Eko
Biomedicines 2025, 13(2), 288; https://doi.org/10.3390/biomedicines13020288 - 24 Jan 2025
Abstract
Background/Objectives: We generated a novel recombinant Vibrio cholerae ghost (rVCG)-based subunit vaccine incorporating the A1 subunit of cholera toxin (CTA1) and a multiepitope Chlamydia trachomatis (CT) antigen (MECA) derived from five chlamydial outer membrane proteins (rVCG-MECA). The ability of this vaccine to protect against a CT transcervical challenge was evaluated. Methods: Female C57BL/6J mice were immunized thrice at two-week intervals with rVCG-MECA or rVCG-gD2 (antigen control) via the intramuscular (IM) or intranasal (IN) route. PBS-immunized mice or mice immunized with live CT served as negative and positive controls, respectively. Results: Vaccine delivery stimulated robust humoral and cell-mediated immune effectors, characterized by local mucosal and systemic CT-specific IgG, IgG2c, and IgA antibody and IFN-γ (Th1 cytokine) responses. The elicited mucosal and systemic IgG2c and IgA antibody responses persisted for 16 weeks post-immunization. Immunization with rVCG-MECA afforded protection comparable to that provided by IN immunization with live CT EBs without any side effects, irrespective of route of vaccine delivery. Conclusions: The results underline the potential of a multiepitope vaccine as a promising resource for protecting against CT genital infection and the potential of CTA1 on the VCG platform as a mucosal and systemic adjuvant for developing CT vaccines.
(This article belongs to the Section Microbiology in Human Health and Disease)
Show Figures

Figure 1

Figure 1
<p>Design and construction of the vaccine vector, pCT-MECA, and expression of rMECA. (<b>A</b>) Twenty (20) immunogenic T and B cell epitopes from five CT outer membrane proteins were selected and fused with CTA1 using linkers. (<b>B</b>) The 1980 bp synthesized coding sequence was inserted into the periplasmic targeting expression vector, pFLAG-CTS, in frame with the Flag Tag sequence to generate plasmid pCT-MECA. (<b>C</b>) Following transformation of <span class="html-italic">V. cholerae</span> V912 harboring the pDKLO1 lysis plasmid with plasmid pCT-MECA and production of rVCG-MECA, the expression of rMECA was confirmed by Western immunoblotting analysis of lyophilized rVCG-MECA samples using anti-Flag monoclonal antibodies. Lane 1—uninduced pCT-MECA control. Lane 2—rMECA 4 h post IPTG induction. MW—Molecular weight marker in kilodaltons (kDa).</p>
Full article ">Figure 2
<p>Schematic diagram of the experimental protocol outlining the immunization, sample collection (<b>A</b>), and challenge (<b>B</b>) schedules.</p>
Full article ">Figure 3
<p>CT-specific systemic and mucosal antibody responses elicited following immunization. Groups of mice were immunized thrice, at two-week intervals via the IM or IN route, as described in the Materials and Methods section. Serum obtained from blood and vaginal lavage samples were obtained two weeks after the first, second, and third immunizations. IgG, IgG2c, and IgA concentrations in serum and vaginal secretions were assessed by a standard antibody ELISA procedure, as described in the Materials and Methods section. The results, from three independent ELISA assays, were simultaneously generated with a standard curve and show data sets corresponding to absorbance values as mean concentrations (ng/mL) ± SD of triplicate cultures for each experiment. The data show the mean antibody concentrations elicited in serum (<b>A</b>,<b>C</b>,<b>E</b>) and vaginal wash (<b>B</b>,<b>D</b>,<b>F</b>) samples. Significant differences between groups were evaluated by one-way ANOVA with Tukey’s post-multiple comparison test at (** <span class="html-italic">p</span> &lt; 0.01, *** <span class="html-italic">p</span> &lt; 0.001, **** <span class="html-italic">p</span> &lt; 0.0001).</p>
Full article ">Figure 4
<p>Comparison of the Th1-associated IgG2c and Th2-associated IgG1 antibodies elicited in serum. The mice were immunized thrice, at two-week intervals via the IM or IN route, as described in the Materials and Methods section. Serum separated from blood samples obtained 2 weeks after the last immunization and pooled for each group. CT-specific IgG2a and IgG1 antibody concentrations were measured by a standard antibody ELISA procedure. The results generated simultaneously with antibody standards display the data sets as mean concentrations (ng/mL) + SD of triplicate cultures for each experiment. The data are from one of two independent assays with similar results and show (<b>A</b>) the mean IgG2a and IgG1 antibody concentrations and (<b>B</b>) the IgG2a/IgG1 ratios. Significant differences between experimental groups were evaluated by one-way ANOVA with Tukey’s post-multiple comparison test at (** <span class="html-italic">p</span> &lt; 0.01; **** <span class="html-italic">p</span> &lt; 0.0001).</p>
Full article ">Figure 5
<p>Long-term CT-specific antibody responses elicited after IM and IN vaccination. The mice were immunized as described above. Serum and vaginal secretions were obtained from each mouse per group at 12 and 16 weeks after the last immunization. The antibody levels were measured using a standard ELISA protocol. The results were generated simultaneously with antibody standards, and the data sets are presented as mean concentrations (ng/mL) + SD of triplicate cultures for each experiment. The data show the individual (distinct colored circles) and mean (represented by each column) IgG2c and IgA antibody concentrations elicited in serum (<b>A</b>,<b>C</b>) and vaginal wash (<b>B</b>,<b>D</b>) samples. Significant differences between experimental groups were evaluated by one-way ANOVA with Tukey’s post-multiple comparison test at (**** <span class="html-italic">p</span> &lt; 0.0001).</p>
Full article ">Figure 6
<p>Relative Avidity Index of CT-specific serum IgG and IgG2c antibodies after IM and IN immunization. The avidity of CT-specific IgG and IgG2c antibodies in serum samples from each immunization group was evaluated at 2 and 4 weeks post-immunization using a modified ELISA assay incorporating the chaotropic agent ammonium thiocyanate (NH<sub>4</sub>SCN). The results were generated simultaneously with a standard curve, and data sets corresponding to absorbance values were calculated as mean concentrations (ng/mL) ± SD of triplicate cultures for each experiment. This experiment was repeated with similar results. The relative avidity index was calculated and displayed as a percentage of the ratio of the antibody concentration of samples treated with NH<sub>4</sub>SCN to that of untreated samples. The data show the percent Relative Avidity Index of serum IgG (<b>A</b>) and IgG2c (<b>B</b>) antibodies using 2 M NH<sub>4</sub>SCN. Significant differences between experimental groups were compared by one-way ANOVA with Tukey’s post-multiple comparison test at (*** <span class="html-italic">p</span> &lt; 0.001, **** <span class="html-italic">p</span> &lt; 0.0001).</p>
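The Relative Avidity Index defined in this caption is a simple ratio expressed as a percentage. A one-function sketch (the function name and example concentrations are hypothetical, not values from the study):

```python
def relative_avidity_index(conc_treated_ng_ml, conc_untreated_ng_ml):
    """Percent of antibody remaining bound after chaotrope (NH4SCN) washing:
    (concentration with NH4SCN / concentration without NH4SCN) x 100."""
    return 100.0 * conc_treated_ng_ml / conc_untreated_ng_ml

# Illustrative values: 320 ng/mL detected after 2 M NH4SCN treatment
# versus 400 ng/mL in the untreated well.
print(relative_avidity_index(320.0, 400.0))  # → 80.0
```

A higher index indicates that a larger fraction of antibodies resisted the chaotrope, i.e., higher functional avidity.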
Full article ">Figure 7
<p>CT-specific mucosal and systemic Th1/Th2 cytokine responses. Immune T cells purified from the spleens and ILNs of immunized mice and controls 4 weeks post-immunization were restimulated in vitro with CT serovar D antigen (UV-irradiated EBs; 10 microgram/mL). The levels of CT-specific IFN-γ (Th1) and IL-4 (Th2) cytokines secreted in the supernatants of culture-stimulated CD4+ T cells were quantified using the Bio-Plex Cytokine Assay kit together with the Bio-Plex Manager software. Cytokine concentrations for each sample were determined by extrapolation from a concurrently generated standard calibration curve. The results are expressed as individual (distinct colored circles) and mean (represented by each column) values (±SD) based on quadruplicate cultures for each experiment. The results are from two independent experiments and are shown as mean IFN-γ (<b>A</b>) and IL-4 (<b>B</b>) cytokine concentrations (pg/mL) ± SD. Significant differences between experimental groups were evaluated by one-way ANOVA with Tukey’s post-multiple comparison test at (* <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.01, **** <span class="html-italic">p</span> &lt; 0.0001).</p>
Full article ">Figure 8
<p>Protection against transcervical challenge with CT serovar D. The mice were immunized IM or IN, as described above, and challenged transcervically with 1 × 10<sup>6</sup> IFU of live CT 4 weeks after the last immunization. One week before the challenge, the mice were administered Depo Provera to stabilize the estrous cycle and facilitate a productive infection. Infections were monitored by cervicovaginal swabbing of individual animals every three days for 30 days, and <span class="html-italic">Chlamydia</span> was isolated from swabs in tissue culture and enumerated. The data show the individual recoverable IFU/mL from each mouse (distinct colored circle) and the mean IFUs for each vaccine (represented by each column). The differences between vaccine groups were compared by two-way ANOVA with Tukey’s post-multiple comparison test at **** <span class="html-italic">p</span> &lt; 0.0001.</p>
Full article ">
27 pages, 12035 KiB  
Article
Numerical Study on Hydrodynamic Performance and Vortex Dynamics of Multiple Cylinders Under Forced Vibration at Low Reynolds Number
by Fulong Shi, Chuanzhong Ou, Jianjian Xin, Wenjie Li, Qiu Jin, Yu Tian and Wen Zhang
J. Mar. Sci. Eng. 2025, 13(2), 214; https://doi.org/10.3390/jmse13020214 - 23 Jan 2025
Viewed by 515
Abstract
Flow around clustered cylinders is widely encountered in engineering applications such as wind energy systems, pipeline transport, and marine engineering. To investigate the hydrodynamic performance and vortex dynamics of multiple cylinders under forced vibration at low Reynolds numbers, with a focus on understanding [...] Read more.
Flow around clustered cylinders is widely encountered in engineering applications such as wind energy systems, pipeline transport, and marine engineering. To investigate the hydrodynamic performance and vortex dynamics of multiple cylinders under forced vibration at low Reynolds numbers, with a focus on understanding the interference characteristics in various configurations, this study is based on a self-developed radial basis function iso-surface ghost cell computing platform, which improves the implicit iso-surface interface representation method to track the moving boundaries of multiple cylinders, and employs a self-constructed CPU/GPU heterogeneous parallel acceleration technique for efficient numerical simulations. This study systematically investigates the interference characteristics of multiple cylinder configurations across various parameter domains, including spacing ratios, geometric arrangements, and oscillation modes. A quantitative analysis of key parameters, such as aerodynamic coefficients, dimensionless frequency characteristics, and vorticity field evolution, is performed. This study reveals that, for a dual-cylinder system, there exists a critical gap ratio between X/D = 2.5 and 3, which leads to an increase in the lift and drag coefficients of both cylinders, a reduction in the vortex shedding periodicity, and a disruption of the wake structure. For a three-cylinder system, the lift and drag coefficients of the two upstream cylinders decrease with increasing spacing. On the other hand, this increased spacing results in a rise in the drag of the downstream cylinder. In the case of a four-cylinder system, the drag coefficients of the cylinders located on either side of the flow direction are relatively high. A significant increase in the lift coefficient occurs when the spacing ratio is less than 2.0, while the drag coefficient of the downstream cylinder is minimized. 
The findings establish a comprehensive theoretical framework for the optimal configuration design and structural optimization of multicylinder systems, while also providing practical guidelines for engineering applications. Full article
(This article belongs to the Section Ocean Engineering)
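The lift and drag coefficients and dimensionless frequencies analyzed in this study follow the standard non-dimensionalizations for flow past a circular cylinder. As a generic sketch (these are textbook definitions with illustrative numbers, not the paper's solver or computed values):

```python
def drag_coefficient(F_d, rho, U, D):
    """C_D = F_d / (0.5 * rho * U^2 * D), with F_d the drag force per unit
    span, rho the fluid density, U the free-stream speed, D the diameter."""
    return F_d / (0.5 * rho * U**2 * D)

def strouhal_number(f_s, D, U):
    """St = f_s * D / U, the dimensionless vortex-shedding frequency."""
    return f_s * D / U

# Illustrative numbers only: water-like density, 1 m/s inflow, 0.1 m cylinder.
print(drag_coefficient(60.0, 1000.0, 1.0, 0.1))  # → 1.2
print(strouhal_number(2.0, 0.1, 1.0))            # → 0.2
```

The lift coefficient is defined identically with the lift force in the numerator; the frequency ratio f0/fs used in Figures 6 and 7 compares the forced-oscillation frequency to this natural shedding frequency.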
Figure 1
<p>Schematic diagram of an immersed body in a Cartesian grid.</p>
Full article ">Figure 2
<p>Computation flowchart of virtual grid method based on CPU/GPU heterogeneous parallelism.</p>
Full article ">Figure 3
<p>Mesh generation for a single cylinder computational domain.</p>
Full article ">Figure 4
<p>Schematic of the boundary conditions.</p>
Full article ">Figure 5
<p>Lift and drag coefficients: (<b>a</b>) Mean drag coefficient; (<b>b</b>) Amplitude of lift coefficient [<a href="#B49-jmse-13-00214" class="html-bibr">49</a>,<a href="#B50-jmse-13-00214" class="html-bibr">50</a>].</p>
Full article ">Figure 6
<p>Lock-in range of cylinder-induced forced vibration [<a href="#B51-jmse-13-00214" class="html-bibr">51</a>].</p>
Full article ">Figure 7
<p>The vorticity distribution over one vortex shedding cycle when <span class="html-italic">f<sub>0</sub></span>/<span class="html-italic">f<sub>s</sub></span> = 1.0.</p>
Full article ">Figure 7 Cont.
<p>The vorticity distribution over one vortex shedding cycle when <span class="html-italic">f<sub>0</sub></span>/<span class="html-italic">f<sub>s</sub></span> = 1.0.</p>
Full article ">Figure 8
<p>Arrangement of the dual cylinders.</p>
Full article ">Figure 9
<p>Lift and drag coefficients: (<b>a</b>) Mean drag coefficient; (<b>b</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 10
<p>Temporal variation in the drag and lift coefficient: (<b>a</b>,<b>c</b>,<b>e</b>) Drag coefficient; (<b>b</b>,<b>d</b>,<b>f</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 11
<p>Spectrum diagrams of lift coefficients for double cylinders: (<b>a</b>) <span class="html-italic">X</span>/<span class="html-italic">D</span> = 1.2; (<b>b</b>) <span class="html-italic">X</span>/<span class="html-italic">D</span> = 3.0; (<b>c</b>) <span class="html-italic">X</span>/<span class="html-italic">D</span> = 4.0.</p>
Full article ">Figure 12
<p>Vorticity distribution diagrams for the double cylinders: (<b>a</b>) <span class="html-italic">X</span>/<span class="html-italic">D</span> = 1.2; (<b>b</b>) <span class="html-italic">X</span>/<span class="html-italic">D</span> = 3.0; (<b>c</b>) <span class="html-italic">X</span>/<span class="html-italic">D</span> = 4.0.</p>
Full article ">Figure 13
<p>Arrangement of the three cylinders.</p>
Full article ">Figure 14
<p>Lift and drag coefficients at <span class="html-italic">X</span>/<span class="html-italic">D</span> = 3: (<b>a</b>) Mean drag coefficient; (<b>b</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 15
<p>Lift and drag coefficients at <span class="html-italic">X</span>/<span class="html-italic">D</span> = 2: (<b>a</b>) Mean drag coefficient; (<b>b</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 16
<p>Lift and drag coefficients at <span class="html-italic">X</span>/<span class="html-italic">D</span> = 4: (<b>a</b>) Mean drag coefficient; (<b>b</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 17
<p>Temporal variation in the drag and lift coefficient: (<b>a</b>,<b>c</b>,<b>e</b>) Drag coefficient; (<b>b</b>,<b>d</b>,<b>f</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 18
<p>Spectrum diagrams of the lift coefficients for the three cylinders at <span class="html-italic">X</span>/<span class="html-italic">D</span> = 3: (<b>a</b>) <span class="html-italic">Y</span>/<span class="html-italic">D</span> = 2.0; (<b>b</b>) <span class="html-italic">Y</span>/<span class="html-italic">D</span> = 3.0; (<b>c</b>) <span class="html-italic">Y</span>/<span class="html-italic">D</span> = 5.0.</p>
Full article ">Figure 19
<p>Vorticity distribution diagrams for the three cylinders at <span class="html-italic">X</span>/<span class="html-italic">D</span> = 3: (<b>a</b>) <span class="html-italic">Y</span>/<span class="html-italic">D</span> = 2.0; (<b>b</b>) <span class="html-italic">Y</span>/<span class="html-italic">D</span> = 3.0; (<b>c</b>) <span class="html-italic">Y</span>/<span class="html-italic">D</span> = 5.0.</p>
Full article ">Figure 20
<p>Arrangement of the four cylinders: (<b>a</b>) In-phase; (<b>b</b>) Anti-phase.</p>
Full article ">Figure 21
<p>Lift and drag coefficients of the four cylinders in phase: (<b>a</b>) Mean drag coefficient; (<b>b</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 22
<p>Temporal variation in the drag and lift coefficient: (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) Drag coefficient; (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 22 Cont.
<p>Temporal variation in the drag and lift coefficient: (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) Drag coefficient; (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 23
<p>Lift and drag coefficients of the four anti-phase cylinders: (<b>a</b>) Mean drag coefficient; (<b>b</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 24
<p>Temporal variation in the drag and lift coefficient: (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) Drag coefficient; (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 24 Cont.
<p>Temporal variation in the drag and lift coefficient: (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) Drag coefficient; (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) Amplitude of the lift coefficient.</p>
Full article ">Figure 25
<p>Spectrum diagrams of the lift coefficients for four cylinders in phase: (<b>a</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 2.0; (<b>b</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 4.0.</p>
Full article ">Figure 25 Cont.
<p>Spectrum diagrams of the lift coefficients for four cylinders in phase: (<b>a</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 2.0; (<b>b</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 4.0.</p>
Full article ">Figure 26
<p>Spectrum diagrams of the lift coefficients for four anti-phase cylinders: (<b>a</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 2.0; (<b>b</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 4.0.</p>
Full article ">Figure 27
<p>Vorticity distribution diagrams for the four cylinders in phase: (<b>a</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 2.0; (<b>b</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 2.5; (<b>c</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 4.0.</p>
Full article ">Figure 28
<p>Vorticity distribution diagrams for four anti-phase cylinders: (<b>a</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 2.0; (<b>b</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 2.5; (<b>c</b>) <span class="html-italic">L</span>/<span class="html-italic">D</span> = 4.0.</p>
Full article ">
12 pages, 1373 KiB  
Review
Telling Ghost Stories Around a Bonfire—A Literature Review of Acute Bleeding Secondary to Pancreatitis
by Gabriele Bellio, Silvia Fattori, Andrea Sozzi, Matteo Maria Cimino and Hayato Kurihara
Medicina 2025, 61(1), 164; https://doi.org/10.3390/medicina61010164 - 20 Jan 2025
Viewed by 572
Abstract
Bleeding is a rare but serious complication of pancreatitis, significantly increasing morbidity and mortality. It can arise from various sources, including erosion of blood vessels by inflammatory processes, formation of pseudoaneurysms, and gastrointestinal bleeding. Early diagnosis and timely intervention are crucial for patient [...] Read more.
Bleeding is a rare but serious complication of pancreatitis, significantly increasing morbidity and mortality. It can arise from various sources, including erosion of blood vessels by inflammatory processes, formation of pseudoaneurysms, and gastrointestinal bleeding. Early diagnosis and timely intervention are crucial for patient survival. Imaging modalities such as computed tomography and angiography are essential for identifying the bleeding source, while endoscopy may help in detecting and treating intraluminal hemorrhage. Management strategies for patients with extraluminal bleeding may involve angioembolization or surgical intervention, depending on the severity and location of the bleeding. While advances in diagnostic and therapeutic techniques have improved outcomes, bleeding in pancreatitis remains a challenging clinical problem requiring a multidisciplinary approach. This review focuses specifically on the bleeding complications of pancreatitis. Full article
(This article belongs to the Special Issue Diagnosis and Treatment of Acute Pancreatitis)
Figure 1
<p>The pathogenesis of bleeding secondary to pancreatitis. WON—walled-off necrosis. Reproduced with permission from Bellio, G., et al. [<a href="#B29-medicina-61-00164" class="html-bibr">29</a>].</p>
Full article ">Figure 2
<p>Contrast-enhanced abdominal computed tomography showing active bleeding inside a peripancreatic collection. Reproduced with permission from Bellio, G., et al. [<a href="#B29-medicina-61-00164" class="html-bibr">29</a>].</p>
Full article ">Figure 3
<p>Flowchart of the management of bleeding patients secondary to pancreatitis. CT—computed tomography; AE—angioembolization.</p>
Full article ">
20 pages, 11840 KiB  
Article
DBnet: A Lightweight Dual-Backbone Target Detection Model Based on Side-Scan Sonar Images
by Quanhong Ma, Shaohua Jin, Gang Bian, Yang Cui and Guoqing Liu
J. Mar. Sci. Eng. 2025, 13(1), 155; https://doi.org/10.3390/jmse13010155 - 17 Jan 2025
Viewed by 436
Abstract
Due to the large number of parameters and high computational complexity of current target detection models, it is challenging to perform fast and accurate target detection in side-scan sonar images under the existing technical conditions, especially in environments with limited computational resources. Moreover, [...] Read more.
Due to the large number of parameters and high computational complexity of current target detection models, it is challenging to perform fast and accurate target detection in side-scan sonar images under the existing technical conditions, especially in environments with limited computational resources. Moreover, since the original waterfall map of side-scan sonar only consists of echo intensity information, which is usually of a large size, it is difficult to fuse it with other multi-source information, which limits the detection accuracy of models. To address these issues, we designed DBnet, a lightweight target detector featuring two lightweight backbone networks (PP-LCNet and GhostNet) and a streamlined neck structure for feature extraction and fusion. To solve the problem of unbalanced aspect ratios in sonar data waterfall maps, DBnet employs the SAHI algorithm with sliding-window slicing inference to improve small-target detection accuracy. Compared with the baseline model, DBnet has 33% fewer parameters and 31% fewer GFLOPs while maintaining accuracy. Tests performed on two datasets (SSUTD and SCTD) showed that the mAP values improved by 2.3% and 6.6%. Full article
(This article belongs to the Special Issue New Advances in Marine Remote Sensing Applications)
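The SAHI-style sliding-window inference described in this abstract tiles a large waterfall image into overlapping slices, runs the detector on each slice, and shifts the resulting boxes back into full-image coordinates. A minimal sketch of the tiling and coordinate-mapping steps (tile size and overlap are arbitrary illustrative values; the detector call itself is omitted):

```python
def make_slices(img_w, img_h, tile=640, overlap=0.2):
    """Sliding windows (x0, y0, x1, y1) covering the image with the given
    fractional overlap; the last row/column is shifted inward so border
    windows keep the full tile size whenever the image is large enough."""
    step = max(1, int(tile * (1 - overlap)))
    xs = list(range(0, max(img_w - tile, 0) + 1, step))
    ys = list(range(0, max(img_h - tile, 0) + 1, step))
    if xs[-1] + tile < img_w:
        xs.append(img_w - tile)
    if ys[-1] + tile < img_h:
        ys.append(img_h - tile)
    return [(x, y, min(x + tile, img_w), min(y + tile, img_h))
            for y in ys for x in xs]

def shift_box(box, window):
    """Map a slice-local detection box back to full-image coordinates."""
    x0, y0, _, _ = window
    bx0, by0, bx1, by1 = box
    return (bx0 + x0, by0 + y0, bx1 + x0, by1 + y0)

# A 2000 x 800 waterfall image with 640-px tiles and 20% overlap.
windows = make_slices(2000, 800, tile=640, overlap=0.2)
print(len(windows))  # → 8
```

After shifting, detections from overlapping windows are typically merged with non-maximum suppression before the final output, as in Figure 4's green result boxes.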
Figure 1
<p>Operation flow chart.</p>
Full article ">Figure 2
<p>Diagram showing the DBnet model’s structure details.</p>
Full article ">Figure 3
<p>A schematic of the slices generated with SAHI in a sample. The colored dashed boxes indicate the four neighboring slices P1, P2, P3, and P4 corresponding to when X = 4, with a size of d × d pixels.</p>
Full article ">Figure 4
<p>Schematic of the SAHI principle. The blue border is the whole image, the red border represents the corresponding slice, and the green border is the detection result.</p>
Full article ">Figure 5
<p>Schematic diagram of PP-LCNet’s structure. Conv is a standard 3 × 3 convolution. DepthSepConv denotes depth-separable convolution, where DW denotes depth-wise convolution and PW denotes point-wise convolution. SE denotes the Squeeze-and-Excitation module.</p>
Full article ">Figure 6
<p>GhostConv operation principle.</p>
Full article ">Figure 7
<p>Ghost bottleneck.</p>
Full article ">Figure 8
<p>Selected samples from SSUTD and SCTD, both of which contain side-scan sonar images of airplane wrecks, shipwrecks, and drowned people.</p>
Full article ">Figure 9
<p>(<b>a</b>) The original image; (<b>b</b>–<b>f</b>) the data enhancement results.</p>
Full article ">Figure 10
<p>The distributions of targets and their sizes.</p>
Full article ">Figure 11
<p>The normalized confusion matrix of the model.</p>
Full article ">Figure 12
<p>mAP comparison curves of DBnet and baseline model.</p>
Full article ">Figure 13
<p>P-R curves of YOLOv8n and DBnet: (<b>a</b>) P-R curve of YOLOv8n detector; (<b>b</b>) P-R curve of DBnet detector.</p>
Full article ">Figure 14
<p>The orange arrows in the figure represent the slicing operation on the original large-size image, and the blue arrows represent the input of each slice into the DBnet detector for prediction.</p>
Full article ">Figure 15
<p>Comparison of detection results. (<b>a</b>) Detection results of the baseline model on side-scan sonar images. (<b>b</b>) Detection results of DBnet on the same images.</p>
Full article ">
13 pages, 3243 KiB  
Article
Genetically Engineered Bacterial Ghosts as Vaccine Candidates Against Klebsiella pneumoniae Infection
by Svetlana V. Dentovskaya, Anastasia S. Vagaiskaya, Alexandra S. Trunyakova, Alena S. Kartseva, Tatiana A. Ivashchenko, Vladimir N. Gerasimov, Mikhail E. Platonov, Victoria V. Firstova and Andrey P. Anisimov
Vaccines 2025, 13(1), 59; https://doi.org/10.3390/vaccines13010059 - 10 Jan 2025
Viewed by 737
Abstract
Background/Objectives: Bacterial ghosts (BGs), non-living empty envelopes of bacteria, are produced either through genetic engineering or chemical treatment of bacteria, retaining the shape of their parent cells. BGs are considered vaccine candidates, promising delivery systems, and vaccine adjuvants. The practical use of BGs [...] Read more.
Background/Objectives: Bacterial ghosts (BGs), non-living empty envelopes of bacteria, are produced either through genetic engineering or chemical treatment of bacteria, retaining the shape of their parent cells. BGs are considered vaccine candidates, promising delivery systems, and vaccine adjuvants. The practical use of BGs in vaccine development for humans is limited because of concerns about the preservation of viable bacteria in BGs. Methods: To increase the efficiency of Klebsiella pneumoniae BG formation and, accordingly, to ensure maximum killing of bacteria, we exploited previously designed plasmids with the lysis gene E from bacteriophage φX174 or with holin–endolysin systems of λ or L-413C phages. Previously, this kit made it possible to generate bacterial cells of Yersinia pestis with varying degrees of hydrolysis and variable protective activity. Results: In the current study, we showed that co-expression of the holin and endolysin genes from the L-413C phage elicited more rapid and efficient K. pneumoniae lysis than lysis mediated by the single gene E alone or the low-functioning holin–endolysin system of λ phage. The introduction of alternative lysing factors into K. pneumoniae cells instead of the E protein leads to the loss of the murein skeleton. The resulting frameless cell envelopes are more reminiscent of bacterial sacs or bacterial skins than BGs. Although such structures are less naive than classical bacterial ghosts, they provide effective protection against infection by a hypervirulent strain of K. pneumoniae and can be recommended as candidate vaccines. For our vaccine candidate generated using the O1:K2 hypervirulent K. pneumoniae strain, both safety and immunogenicity aspects were evaluated. Humoral and cellular immune responses were significantly increased in mice that were intraperitoneally immunized compared with subcutaneously vaccinated animals (p < 0.05). 
Conclusions: Therefore, this study presents novel perspectives for future research on K. pneumoniae ghost vaccines. Full article
(This article belongs to the Section Vaccines against Infectious Diseases)
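Lysis efficiency in BG production is typically judged from the drop in viable counts over time, as monitored by CFU determination in Figure 1B. As a hedged sketch of that bookkeeping (the function and example counts are illustrative, not the study's data):

```python
import math

def lysis_efficiency(cfu_before, cfu_after):
    """Return (percent of cells inactivated, log10 reduction) given viable
    counts (CFU/mL) before and after induction of the lysis genes."""
    pct_inactivated = 100.0 * (1.0 - cfu_after / cfu_before)
    log_reduction = math.log10(cfu_before / cfu_after)
    return pct_inactivated, log_reduction

# Illustrative counts: 1e9 CFU/mL before induction, 1e4 CFU/mL after.
pct, log_red = lysis_efficiency(1e9, 1e4)
print(f"{pct:.4f}% inactivated, {log_red:.1f}-log reduction")
```

For human vaccine use, the residual viable count matters as much as the percentage, which is why maximal killing is emphasized in the abstract above.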
Figure 1
<p>Preparation of <span class="html-italic">K. pneumoniae</span> KPI1627 BGs. Growth and lysis were monitored by measuring OD<sub>550</sub> (<b>A</b>) and the determination of the number of CFU (<b>B</b>). The data are presented as the mean ± s.d. of three samples.</p>
Full article ">Figure 2
<p>Transmission electron micrographs of <span class="html-italic">K. pneumoniae</span> strains: (<b>A</b>) KPI1627, (<b>B</b>) KPI1627/pEYR’-E, (<b>C</b>) KPI1627/pEYR’-S-R-Rz, (<b>D</b>) KPI1627/pEYR’-E-S-R-Rz, (<b>E</b>) KPI1627/pEYR’-Y-K, (<b>F</b>) KPI1627/pEYR’-E-Y-K. The bar represents 1 μm (<b>A</b>,<b>C</b>–<b>F</b>) or 500 nm (<b>B</b>).</p>
Full article ">Figure 3
<p>Antibody response in sera of mice immunized s.c. and i.p. with KPI-Y-K and PBS. #—<span class="html-italic">p</span> &gt; 0.05; *—<span class="html-italic">p</span> &lt; 0.05; **—<span class="html-italic">p</span> &lt; 0.005; ****—<span class="html-italic">p</span> &lt; 0.0001.</p>
Full article ">Figure 4
<p>Specific IFN-γ, IL-6, and TNF-α levels of splenic lymphocytes from immunized mice. * <span class="html-italic">p</span> &lt; 0.05 vs. PBS group.</p>
Full article ">Figure 5
<p>The expression levels of CD69 on CD3<sup>+</sup>CD4<sup>+</sup>, CD3<sup>+</sup>CD8<sup>+</sup>, and CD19<sup>+</sup> cell subsets of splenic lymphocytes from immunized mice. The splenic lymphocytes of mice were separated 28 days after the first immunization, and corresponding BGs were used as immunogens. Following a 48-h incubation period, lymphocytes were harvested and subjected to flow cytometry analysis. ** <span class="html-italic">p</span> &lt; 0.005 vs. PBS group. Graphs and histograms show the distribution of CD69 expression in the lymphocyte subsets.</p>
Full article ">Figure 6
<p>Protection of <span class="html-italic">KP</span>-BGs against a lethal challenge with the wild-type <span class="html-italic">K. pneumoniae</span> KPI1627 strain. Mice were subjected to i.p. and s.c. immunization with KPI-Y-K BGs at day 0 and boosted twice at 10 and 20 days. Ten days after the last immunization, 10 mice from each group were challenged i.p. with 10<sup>4</sup> CFUs of <span class="html-italic">K. pneumoniae</span> KPI1627 (5000 LD<sub>50</sub>). **** <span class="html-italic">p</span> &lt; 0.0001.</p>
Full article ">
17 pages, 7328 KiB  
Article
Mom Knows More than a Little Ghost: Children’s Attributions of Beliefs to God, the Living, and the Dead
by Dawoon Jung, Euisun Kim and Sung-Ho Kim
Religions 2025, 16(1), 68; https://doi.org/10.3390/rel16010068 - 10 Jan 2025
Viewed by 655
Abstract
The growing body of research on children’s understanding of extraordinary minds has demonstrated that children believe in the persistence of mental functioning after death. However, beyond the continuity of mind, the supernatural conception of death often involves the concept of the disembodied mind, [...] Read more.
The growing body of research on children’s understanding of extraordinary minds has demonstrated that children believe in the persistence of mental functioning after death. However, beyond the continuity of mind, the supernatural conception of death often involves the concept of the disembodied mind, which transcends the constraints of the physical body, possessing supernatural mental capacities. The current study investigated whether children differentiate between a dead agent’s mind and ordinary minds in terms of their perceptual and information-updating capacities. In a location-change false-belief task, which involved a story of a mouse protagonist that was either eaten by an alligator or not, 4- to 6-year-old Korean children (N = 114) were asked about the mental states of the protagonist, an ordinary adult (mom), and God. The results showed (1) older children’s tendency to respond in a way that differentiated (the living) mom from the dead protagonist, (2) an increasing trend of differentiating God’s super-knowingness from ordinary minds with age, and (3) inconclusive evidence regarding children’s differential responses to the dead versus living protagonist. This study suggests that children are not predisposed to view dead agents as possessing a disembodied and supernatural mind, highlighting the importance of cultural learning in the development of such religious concepts. Full article
(This article belongs to the Section Religions and Health/Psychology/Social Sciences)
Figure 1
<p>(<b>a</b>) Images of characters, (<b>b</b>,<b>c</b>) summary description of the scenario in the alive (<b>b</b>) and dead (<b>c</b>) conditions. For a complete description of the pictures and scripts, see <a href="#app1-religions-16-00068" class="html-app">Appendix A</a>.</p>
Full article ">Figure 2
<p>The probability of attributing FB to each agent based on model output under the alive (<b>a</b>) and dead (<b>b</b>) conditions. Shaded areas are 95% confidence intervals based on model estimates.</p>
Full article ">
22 pages, 18757 KiB  
Article
CSGD-YOLO: A Corn Seed Germination Status Detection Model Based on YOLOv8n
by Wenbin Sun, Meihan Xu, Kang Xu, Dongquan Chen, Jianhua Wang, Ranbing Yang, Quanquan Chen and Songmei Yang
Agronomy 2025, 15(1), 128; https://doi.org/10.3390/agronomy15010128 - 7 Jan 2025
Viewed by 483
Abstract
Seed quality testing is crucial for ensuring food security and stability. To accurately detect the germination status of corn seeds during the paper medium germination test, this study proposes a corn seed germination status detection model based on YOLO v8n (CSGD-YOLO). Initially, to [...] Read more.
Seed quality testing is crucial for ensuring food security and stability. To accurately detect the germination status of corn seeds during the paper medium germination test, this study proposes a corn seed germination status detection model based on YOLO v8n (CSGD-YOLO). Initially, to alleviate the complexity encountered in conventional models, a lightweight spatial pyramid pooling fast (L-SPPF) structure is engineered to enhance the representation of features. Simultaneously, a detection module dubbed Ghost_Detection, leveraging the GhostConv architecture, is devised to boost detection efficiency while simultaneously reducing parameter counts and computational overhead. Additionally, during the downsampling process of the backbone network, a downsampling module based on receptive field attention convolution (RFAConv) is designed to boost the model’s focus on areas of interest. This study further proposes a new module named C2f-UIB-iAFF based on the faster implementation of cross-stage partial bottleneck with two convolutions (C2f), universal inverted bottleneck (UIB), and iterative attention feature fusion (iAFF) to replace the original C2f in YOLOv8, streamlining model complexity and augmenting the feature fusion prowess of the residual structure. Experiments conducted on the collected corn seed germination dataset show that CSGD-YOLO requires only 1.91 M parameters and 5.21 G floating-point operations (FLOPs). The detection precision (P), recall (R), mAP0.5, and mAP0.50:0.95 achieved are 89.44%, 88.82%, 92.99%, and 80.38%. Compared with YOLO v8n, CSGD-YOLO improves performance in terms of accuracy, model size, parameter number, and floating-point operation counts by 1.39, 1.43, 1.77, and 2.95 percentage points, respectively. 
Therefore, CSGD-YOLO outperforms existing mainstream target detection models in detection performance and model complexity, making it suitable for detecting corn seed germination status and providing a reference for rapid germination rate detection. Full article
(This article belongs to the Section Precision and Digital Agriculture)
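The GhostConv layers used in the Ghost_Detection head cut parameters by generating part of the output channels with cheap depthwise ("ghost") operations instead of full convolutions. A back-of-the-envelope parameter count using the standard Ghost-module formula with ratio s (biases ignored; the channel numbers are illustrative, not CSGD-YOLO's actual layer sizes):

```python
def conv_params(c_in, c_out, k=3):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k=3, d=3, s=2):
    """Ghost module: a primary k x k conv producing c_out/s channels, then
    (s - 1) cheap d x d depthwise ops generating the remaining channels."""
    primary = c_out // s
    return c_in * primary * k * k + (s - 1) * primary * d * d

std = conv_params(128, 256)     # 294912 parameters
ghost = ghost_params(128, 256)  # 148608 parameters
print(std, ghost, round(std / ghost, 2))  # roughly a 2x reduction
```

With ratio s = 2, roughly half the output channels come almost for free, which is where the parameter and GFLOP savings reported in these abstracts originate.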
Figure 1
<p>The platform for corn germination image collection.</p>
Full article ">Figure 2
<p>Different germination states of corn seed in germination test. (<b>a</b>) Examples of seed germination states. (<b>b</b>) Boundary box annotations for seed germination states.</p>
Full article ">Figure 3
<p>Examples of data enhancement.</p>
Full article ">Figure 4
<p>The distribution of the number of tags.</p>
Full article ">Figure 5
<p>The structures of YOLO v8n.</p>
Full article ">Figure 6
<p>The structures of CSGD-YOLO.</p>
Full article ">Figure 7
<p>The structures of SPPF and L-SPPF.</p>
Full article ">Figure 8
<p>The structures of the C2f-UIB-iAFF module.</p>
Full article ">Figure 9
<p>The structures of Ghost_Detection module.</p>
Full article ">Figure 10
<p>The structures of the Downsampling Convolutional Module.</p>
Full article ">Figure 11
<p>Train and test loss curves on different data.</p>
Full article ">Figure 12
<p>Metrics curves of YOLO v8n.</p>
Full article ">Figure 13
<p>Confusion matrix of the model test. (<b>a</b>) Confusion matrix of YOLO v8n; (<b>b</b>) confusion matrix of CSGD-YOLO.</p>
Full article ">