Search Results (316)

Search Parameters:
Keywords = moving target indication

19 pages, 4965 KiB  
Article
Development of a Short-Range Multispectral Camera Calibration Method for Geometric Image Correction and Health Assessment of Baby Crops in Greenhouses
by Sabina Laveglia, Giuseppe Altieri, Francesco Genovese, Attilio Matera, Luciano Scarano and Giovanni Carlo Di Renzo
Appl. Sci. 2025, 15(6), 2893; https://doi.org/10.3390/app15062893 - 7 Mar 2025
Viewed by 101
Abstract
Multispectral imaging plays a key role in crop monitoring. A major challenge, however, is spectral band misalignment, which can hinder accurate plant health assessment by distorting the calculation of vegetation indices. This study presents a novel approach for short-range calibration of a multispectral camera, using stereo vision for precise geometric correction of acquired images. By treating the multispectral camera lenses as binocular pairs, the sensor acquisition distance was estimated and an alignment model was developed for distances ranging from 500 mm to 1500 mm. The red band image served as the reference, while the remaining bands were treated as moving images. The stereo camera calibration algorithm estimated the target distance, enabling correction of band misalignment through previously developed models. The alignment models were applied to assess the health status of baby leaf crops (Lactuca sativa cv. Maverik) by analyzing spectral indices correlated with chlorophyll content. The results showed that the stereo vision approach achieved high accuracy in distance estimation, with average reprojection errors of approximately 0.013 pixels (4.485 × 10⁻⁵ mm). The proposed linear model also reasonably explained the effect of distance on alignment offsets. The overall performance of the experimental alignment models was satisfactory, with offset errors of less than 3 pixels across the bands. Although the results are not yet robust enough for a fully predictive model of chlorophyll content in plants, the analysis of vegetation indices demonstrated a clear distinction between healthy and unhealthy plants.
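The band-alignment step lends itself to a short illustration. The sketch below is a minimal, assumed implementation rather than the authors' code: it applies a hypothetical per-band linear offset model offset = a·d + b, with the distance d supplied by stereo calibration, and shifts each moving band onto the red reference band with OpenCV. The coefficient values and function names are placeholders.

```python
# Minimal sketch of distance-dependent band alignment (assumed model, not the paper's code).
# offset(d) = a * d + b per band and axis; coefficients below are illustrative placeholders.
import cv2
import numpy as np

# Hypothetical per-band linear offset coefficients (pixels vs. distance in mm).
OFFSET_MODEL = {
    "G": {"ax": 0.004, "bx": -1.2, "ay": 0.002, "by": 0.5},
    "B": {"ax": 0.006, "bx": -2.0, "ay": 0.003, "by": 0.8},
}

def align_band(moving: np.ndarray, band: str, distance_mm: float) -> np.ndarray:
    """Shift a moving band onto the red reference using the linear offset model."""
    m = OFFSET_MODEL[band]
    dx = m["ax"] * distance_mm + m["bx"]   # predicted x offset (pixels) at this distance
    dy = m["ay"] * distance_mm + m["by"]   # predicted y offset (pixels)
    T = np.float32([[1, 0, -dx], [0, 1, -dy]])  # translation that removes the offset
    return cv2.warpAffine(moving, T, (moving.shape[1], moving.shape[0]))

# Usage: green_aligned = align_band(green_raw, "G", distance_mm=800.0)
```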
(This article belongs to the Special Issue Advances in Automation and Controls of Agri-Food Systems)
Figures:
Figure 1: Location of the optical lens of the MicaSense RedEdge P sensor and relative pose of the spectral bands (B, G, NR, and RE) to the R band, with distance from the reference lens (cm).
Figure 2: Averages of the x-axis and y-axis reprojection errors (pixels) for the stereo setups of the G, B, NR, and RE bands to the R band as a function of distance.
Figure 3: Reprojection errors expressed as the average values along the X and Y components for a single distance, relative to the stereo pairs used in the calibration process.
Figure 4: Results of the model interpolation on the experimental data, with the offsets of the B, G, NR, and RE bands relative to the R band, analyzed along the X-axis and Y-axis using the CB, FT, and RG methods.
Figure 5: Raw images of R, G, and B bands of the multispectral sensor (MS) and corresponding aligned images using the CB, FT, and RG methods on baby lettuce leaves.
Figure 6: Correlation matrix plot of Pearson's coefficient among chlorophyll content and different vegetation indices for the lettuce leaves.
Figure 7: Box plot of chlorophyll content (µmol/cm²) and leaf area (cm²) of baby lettuce plants grown in good water conditions (healthy) and water deficit conditions (unhealthy), with significance (α = 0.05) indicated by Tukey's letters.
Figure 8: Selected vegetation indices (GNDVI, SR, MCARI, NARI, and mARI) applied to baby lettuce leaves as a result of image alignment on healthy and stressed plants.
27 pages, 36300 KiB  
Article
Maritime Target Radar Detection and Tracking via DTNet Transfer Learning Using Multi-Frame Images
by Xiaoyang He, Xiaolong Chen, Xiaolin Du, Xinghai Wang, Shuwen Xu and Jian Guan
Remote Sens. 2025, 17(5), 836; https://doi.org/10.3390/rs17050836 - 27 Feb 2025
Viewed by 167
Abstract
Traditional detection and tracking methods struggle with the complex and dynamic maritime environment because of their poor generalization capabilities. To address this, this paper improves the YOLOv5 network by integrating a Transformer and a Convolutional Block Attention Module (CBAM) with the multi-frame image information obtained from radar scans. It proposes a detection and tracking method based on the Detection Tracking Network (DTNet), which leverages transfer learning and the DeepSORT tracking algorithm, enhancing the detection capabilities of the model across various maritime environments. First, radar echoes are preprocessed to create a dataset of Plan Position Indicator (PPI) images for different marine conditions. An integrated network for detecting and tracking maritime targets is then designed, exploiting the feature differences between moving targets and sea clutter, along with the inter-frame coherence of moving targets, to achieve multi-target detection and tracking. The proposed method was validated on real maritime targets, achieving a precision of 99.06%, a 7.36-percentage-point improvement over the original YOLOv5, and demonstrating superior detection and tracking performance. The impact of maritime regions and weather conditions is also discussed: when transferring from Region I to Regions II and III, precision reached 92.2% and 89%, respectively, and in rainy weather, despite interference from sea clutter and rain clutter, precision still reached 82.4%, indicating strong generalization capabilities compared with the original YOLOv5 network.
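As a rough illustration of the transfer-learning step, the sketch below (an assumption, not the authors' training code) freezes a pretrained detector backbone and fine-tunes only the head on PPI images from a new maritime area. The model, layer names, and loss are schematic placeholders for a YOLO-style detector.

```python
# Minimal transfer-learning sketch in PyTorch (illustrative; not the DTNet code).
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in for a YOLO-style detector: a feature backbone plus a detection head."""
    def __init__(self, num_outputs: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_outputs)  # box + objectness, schematically

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector()
# model.load_state_dict(torch.load("pretrained_area1.pt"))  # hypothetical Area I weights

for p in model.backbone.parameters():   # freeze source-domain features
    p.requires_grad = False

opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                  # placeholder for a real detection loss

ppi_batch = torch.randn(8, 1, 128, 128)  # fake PPI images from the target area
targets = torch.randn(8, 5)
opt.zero_grad()
loss = loss_fn(model(ppi_batch), targets)
loss.backward()
opt.step()
```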
Graphical abstract
Figures:
Figure 1: Image of the JRC radar.
Figure 2: Data from different maritime areas. (a) Maritime Area I. (b) Maritime Area II. (c) Maritime Area III.
Figure 3: The actual sea surface conditions under different weather scenarios. (a) Sunny day. (b) Rainy day. (c) Foggy day. (d) Windy day.
Figure 4: Actual images of the target vessels. (a) Fishing vessels. (b) Transport ship.
Figure 5: Different weather data. (a) Sunny day. (b) Rainy day. (c) Foggy day. (d) Windy day.
Figure 6: The overall flow of the DTNet network.
Figure 7: Improved YOLOv5 network.
Figure 8: Transformer module.
Figure 9: Convolutional block attention module.
Figure 10: Flowchart of the DeepSORT tracking algorithm.
Figure 11: The DTNet transfer learning process.
Figure 12: Relevant parameter curves. (a) Loss function curve. (b) Precision–recall curve.
Figure 13: AIS and detection-tracking results for Maritime Area I. (a) The AIS display of the targets in Maritime Area I. (b) Detection performance. (c) Tracking performance.
Figure 14: Classic network comparison curve.
Figure 15: Comparison with classical detection algorithms. (a) DTNet. (b) YOLOv5. (c) Faster R-CNN. (d) RetinaNet.
Figure 16: Precision and loss curve for the transfer from Maritime Area I to Maritime Area II.
Figure 17: The AIS and detection-tracking results of Maritime Area II. (a) The AIS display of the targets in Maritime Area II. (b) Detection results of the model transfer from Maritime Area I to Maritime Area II. (c) Tracking results of the model transfer from Maritime Area I to Maritime Area II.
Figure 18: The precision and loss curve for transfer from Maritime Area I to Maritime Area III.
Figure 19: The AIS and detection-tracking results of Maritime Area III. (a) The AIS display of the targets in Maritime Area III. (b) The detection results of the model transfer from Maritime Area I to Maritime Area III. (c) The tracking results of the model transfer from Maritime Area I to Maritime Area III.
Figure 20: The precision and loss curve for transfer from Maritime Area I to heavy rain in Maritime Area I.
Figure 21: The AIS and detection-tracking results of Maritime Area I during rainy weather. (a) The AIS display of the targets in Maritime Area I during heavy rain. (b) The detection results for transfer from Maritime Area I to heavy rain in Maritime Area I. (c) The tracking results for transfer from Maritime Area I to heavy rain in Maritime Area I.
17 pages, 4982 KiB  
Article
ZPTM: Zigzag Path Tracking Method for Agricultural Vehicles Using Point Cloud Representation
by Shuang Yang, Engen Zhang, Yufei Liu, Juan Du and Xiang Yin
Sensors 2025, 25(4), 1110; https://doi.org/10.3390/s25041110 - 12 Feb 2025
Viewed by 475
Abstract
Automatic navigation, one of the key technologies in farming automation, enables unmanned driving and operation of agricultural vehicles. In this research, the ZPTM (Zigzag Path Tracking Method) was proposed to reduce the complexity of path planning by using a point cloud consisting of a series of anchor points with spatial information, obtained from orthophotos taken by UAVs (Unmanned Aerial Vehicles), to represent the curved path in the zigzag. A local straight path was created by linking two adjacent anchor points, forming the local target path to be tracked, which simplified the navigation algorithm for zigzag path tracking. A nonlinear feedback function was established, using both lateral and heading errors as inputs, to determine the desired heading angle of the agricultural vehicle, which was guided along the local target path with minimal errors. A GUI (Graphical User Interface) was designed on the navigation terminal to visualize and monitor the working process of agricultural vehicles in automatic navigation, displaying interactive controls and components, including representations of the zigzag path and the agricultural vehicle rendered using an affine transformation. A high-clearance sprayer equipped with an automatic navigation system served as the test platform to evaluate the proposed ZPTM. Zigzag navigation tests were conducted to explore the impact of path tracking parameters, including path curvature, moving speed, and spacing between anchor points, on navigation performance. Based on these tests, a regression model was established to optimize these parameters for accurate and smooth movement. Field test results showed that the maximum error, average error, and RMS (Root Mean Square) error in zigzag navigation were 3.30 cm, 2.04 cm, and 2.27 cm, respectively. These results indicate that the point-cloud-path-based ZPTM demonstrates adequate stability, accuracy, and applicability in zigzag navigation.
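The nonlinear feedback idea can be illustrated compactly. The sketch below is an assumed, Stanley-style stand-in for the paper's unspecified feedback function: it combines the heading error with an arctangent term in the lateral error to produce a desired heading along the local straight path between two anchor points. The gain k and all names are placeholders.

```python
# Illustrative nonlinear feedback for local path tracking (assumed form, not the paper's exact law).
import numpy as np

def desired_heading(p1, p2, pos, theta0, v, k=1.5):
    """Desired heading toward the local target path p1 -> p2.

    p1, p2 : adjacent anchor points (x, y) defining the local straight path
    pos    : vehicle position (x, y); theta0: current heading (rad); v: speed (m/s)
    """
    path = np.asarray(p2, float) - np.asarray(p1, float)
    theta_path = np.arctan2(path[1], path[0])       # heading of the local path
    t_hat = path / np.linalg.norm(path)             # unit tangent of the path
    d = np.asarray(pos, float) - np.asarray(p1, float)
    e_y = t_hat[0] * d[1] - t_hat[1] * d[0]         # signed lateral error (left of path > 0)
    # Heading error, wrapped to (-pi, pi].
    d_theta = np.arctan2(np.sin(theta_path - theta0), np.cos(theta_path - theta0))
    # Stanley-style combination of heading error and lateral-error feedback.
    return theta0 + d_theta + np.arctan2(-k * e_y, v)

theta = desired_heading((0, 0), (10, 0), pos=(2.0, 0.5), theta0=0.0, v=1.0)
print(np.degrees(theta))  # negative: steer back toward the path
```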
(This article belongs to the Section Sensors and Robotics)
Figures:
Figure 1: Components of the test platform.
Figure 2: The process of (a) image acquisition by DJI Phantom 4 and (b) zigzag path planning.
Figure 3: Flow chart of the DPA.
Figure 4: An example of the DPA: (a) Original point-cloud path. (b–d) Process using DPA. (e) Result after applying DPA. (f) Comparison of original and simplified point-cloud path.
Figure 5: Point cloud path tracking algorithm: ω*_(c−1), ω*_c, ω*_(c+1), ω*_k, and ω*_(k+1) are anchor points in the point cloud path. C is the movement center of the agricultural vehicle chassis, representing the vehicle position. η is the nearest point from the vehicle position to the point cloud path. Q is the target point. θ₀ is the heading angle of the agricultural vehicle. θ is the desired heading angle. Δθ is the heading error, defined as the angle between the vector CQ and the heading of the vehicle. e_y is the lateral error, defined as the perpendicular distance from C to the vector from ω*_c to ω*_(c+1).
Figure 6: The GUI.
Figure 7: Coordinate transformation: Left-O-Up is the screen coordinate system. P_1 and P_2 are adjacent anchor points in the UTM coordinate system that form the local target path. P_m is the midpoint of the local target path. τ is the angle between the local target path and the UTM northing. P′_1 and P′_2 are the anchor points after rotation of P_1 and P_2, respectively. P_scr1 and P_scr2 are anchor points for converting the local target path to the screen. The screen resolution is W_scr × H_scr.
Figure 8: Zigzag navigation tests with (a) the high-clearance sprayer in (b) different zigzag path curvatures for the same turn.
Figure 9: The actual trajectories of the high-clearance sprayer following five target zigzag paths.
Figure 10: Comparative analysis of RMS in response to changes in (a) curvature, (b) speed, and (c) spacing.
Figure 11: The test scene.
Figure 12: Actual trajectories and enlarged images of the high-clearance sprayer following the entry and exit paths.
18 pages, 12089 KiB  
Article
Analysis of Interference Magnetic Field Characteristics of Underwater Gliders
by Taotao Xie, Dawei Xiao, Jiawei Zhang and Qing Ji
J. Mar. Sci. Eng. 2025, 13(2), 330; https://doi.org/10.3390/jmse13020330 - 11 Feb 2025
Viewed by 436
Abstract
Underwater gliders are a new type of unmanned underwater vehicle characterized by high energy efficiency, long endurance, and low operational costs. They hold broad application prospects in fields such as ocean exploration, resource surveying, maritime surveillance, and military defense. This paper takes underwater gliders as the research subject and analyzes the characteristics of magnetic interference signals under different operational conditions. The study found that in the fully operational state, the motor generates interference signals at 17 Hz, while during attitude adjustment the movement of the moving block generates significant interference magnetic fields; during the forward and backward motion of the block, interference signals at 20 Hz are particularly pronounced. To support equipping underwater gliders with magnetic field sensors for underwater target detection, this paper proposes an adaptive filtering method based on the Recursive Least Squares (RLS) algorithm. The experimental results indicate that after RLS filtering, the amplitude of the noise signal is reduced by over 60%, and the method effectively eliminates the 17 Hz and 20 Hz noise components caused by the glider's motor. The algorithm achieves an average increase in signal-to-noise ratio (SNR) of 12 dB, equivalent to an approximately 80% improvement in accuracy, and significantly enhances the stability and SNR of the magnetic field signals of underwater targets. This provides a feasible solution for equipping underwater gliders with magnetic field sensors for underwater target detection, with important practical engineering significance.
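A standard RLS adaptive noise canceller gives the flavor of the filtering step. The sketch below is a textbook formulation (an assumption, not the paper's implementation), using a motor-correlated reference channel to cancel narrowband interference such as the 17 Hz and 20 Hz components; the filter order and forgetting factor are illustrative.

```python
# Textbook RLS adaptive noise canceller (illustrative parameters).
import numpy as np

def rls_cancel(d, x, order=8, lam=0.99, delta=100.0):
    """Remove interference correlated with reference x from measurement d.

    d : measured magnetometer signal (target signature + interference)
    x : interference reference (e.g., motor-synchronized signal)
    Returns the error signal e, i.e., the cleaned measurement.
    """
    n = len(d)
    w = np.zeros(order)                    # adaptive filter weights
    P = np.eye(order) / delta              # inverse correlation matrix estimate
    e = np.zeros(n)
    for k in range(order, n):
        u = x[k - order:k][::-1]           # most recent reference samples
        y = w @ u                          # current interference estimate
        e[k] = d[k] - y                    # cleaned output sample
        g = P @ u / (lam + u @ P @ u)      # gain vector
        w = w + g * e[k]                   # weight update
        P = (P - np.outer(g, u @ P)) / lam
    return e

fs = 200.0
t = np.arange(0, 10, 1 / fs)
target = 0.1 * np.sin(2 * np.pi * 0.5 * t)                 # slow target signature
noise = np.sin(2 * np.pi * 17 * t) + np.sin(2 * np.pi * 20 * t)
reference = noise + 0.05 * np.random.randn(len(t))          # noisy motor reference
cleaned = rls_cancel(target + noise, reference)
```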
(This article belongs to the Special Issue Underwater Target Detection and Recognition)
Figures:
Figure 1: Structural diagram of the underwater glider.
Figure 2: Motion mechanism of the underwater glider.
Figure 3: Measurement method for magnetic field noise of the underwater glider.
Figure 4: Static magnetic field (glider inactive; the magnetic sensor is located at a horizontal distance of 10 cm in the middle of the glider).
Figure 5: Underwater glider tail oil bag discharge (the magnetic sensor is located at a horizontal distance of 10 cm in the middle of the glider).
Figure 6: Tail oil bag refill of the underwater glider (the magnetic sensor is located at a horizontal distance of 10 cm in the middle of the glider).
Figure 7: The counterweight slider shifts to the left (the magnetic sensor is located at a horizontal distance of 10 cm in the middle of the glider).
Figure 8: The counterweight slider shifts to the right (the magnetic sensor is located at a horizontal distance of 10 cm in the middle of the glider).
Figure 9: The counterweight slider moves forward (the magnetic sensor is located at a horizontal distance of 10 cm in the middle of the glider).
Figure 10: The counterweight slider moves back (the magnetic sensor is located at a horizontal distance of 10 cm in the middle of the glider).
Figure 11: Static magnetic field (glider inactive; the magnetic sensor is located 10 cm horizontally from the head of the glider).
Figure 12: Underwater glider tail oil bag discharge (the magnetic sensor is located 10 cm horizontally from the head of the glider).
Figure 13: Tail oil bag refill of the underwater glider (the magnetic sensor is located 10 cm horizontally from the head of the glider).
Figure 14: The counterweight slider shifts to the left (the magnetic sensor is located 10 cm horizontally from the head of the glider).
Figure 15: The counterweight slider shifts to the right (the magnetic sensor is located 10 cm horizontally from the head of the glider).
Figure 16: The counterweight slider moves forward (the magnetic sensor is located 10 cm horizontally from the head of the glider).
Figure 17: The counterweight slider moves back (the magnetic sensor is located 10 cm horizontally from the head of the glider).
Figure 18: The counterweight slider moves forward (after filtering).
Figure 19: The counterweight slider moves back (after filtering).
Figure 20: The counterweight slider moves forward (after filtering).
Figure 21: The counterweight slider moves back (after filtering).
26 pages, 5903 KiB  
Article
Internet of Things (IoT) for Smart Built Environment (SBE): Understanding the Complexity and Contributing to Energy Efficiency; A Case Study in Mediterranean Climates
by Ignacio Martínez Ruiz, Enrique Cano Suñén, Álvaro Marco Marco and Ángel Fernández Cuello
Appl. Sci. 2025, 15(4), 1724; https://doi.org/10.3390/app15041724 - 8 Feb 2025
Viewed by 445
Abstract
To meet the 2050 climate-change and decarbonization targets while ensuring thermal comfort, Internet of Things (IoT) ecosystems are key enabling technologies for moving the Built Environment (BE) towards the Smart Built Environment (SBE). The first contributions of this paper conceptualise the SBE from its dynamic and adaptive perspectives, considering the human habitat, and enunciate the SBE as a multidimensional approach through six ways of inhabiting: defensive, projective, scientific, thermodynamic, subjective, and complex. From these premises, to analyse the performance indicators that characterise these multidisciplinary ways of inhabiting, an IoT-driven methodology is proposed: deploy a sensor infrastructure to acquire experimental measurements; analyse the data to convert them into context-aware information; and make knowledge-based decisions. This work thus tackles the inefficiency and high energy consumption of public buildings, with the challenge of balancing energy efficiency and user comfort in dynamic scenarios. As current systems lack real-time adaptability, this work integrates an IoT-driven approach to enhance energy management and reduce discrepancies between measured temperatures and normative thresholds. Following the energy efficiency directives, the obtained results contribute to understanding the complexity of the SBE by analysing its thermal performance, quantifying the potential for energy saving, and estimating its economic impact. The derived conclusions show that IoT-driven solutions allow the generation of real-data-based models that enhance SBE knowledge by increasing energy efficiency and guaranteeing user comfort while minimising environmental effects and economic impact.
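One concrete way to quantify the gap between measured temperatures and normative thresholds, as the methodology describes, is to accumulate degree-hours of overheating from the sensor time series. The pandas sketch below is an assumed illustration only; the column names and the 21 °C setpoint are placeholders, not values from the paper.

```python
# Illustrative degree-hours computation from IoT temperature logs (assumed schema).
import pandas as pd

# Hypothetical hourly readings: timestamp, zone, indoor temperature (deg C).
df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-08", periods=6, freq="h"),
    "zone": ["lab"] * 6,
    "temp_c": [20.1, 21.4, 22.8, 23.5, 22.0, 20.7],
})

SETPOINT_C = 21.0  # placeholder normative threshold

# Degree-hours above the threshold: a simple proxy for avoidable heating.
df["excess"] = (df["temp_c"] - SETPOINT_C).clip(lower=0)
overheating = df.groupby("zone")["excess"].sum()  # in deg C * h per zone
print(overheating)
```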
Figures:
Figure 1: Modes of inhabiting: evolution and relationship with the IoT data monitored in this work. AI-generated images by pixlr.com (accessed on 20 December 2024).
Figure 2: IoT ecosystem deployed in the smart campus: (a) Methodology steps [49,50,51]; (b) Campus and buildings; (c) Labelled locations; (d) Examples of geolocated infrastructure with detail of sensors.
Figure 3: Analysis of thermal evolution in the autumn–winter period.
Figure 4: Analysis of thermal inertia: (a) November; (b) December; (c) January; (d) February.
Figure 5: Analysis of thermal inertia (holiday period 1): (a) Weather; (b) Comparison by zones.
Figure 6: Analysis of thermal inertia (holiday period 2): (a) Weather; (b) Comparison by zones.
Figure 7: Analysis of heating influence.
Figure 8: Detailed analysis of the heating influence: (a) Thermal evolution vs. heating; (b) Thermal evolution vs. heating vs. outdoor temperature.
Figure 9: Analysis of thermal morphology: (a) Daily evolution; (b) Daily evolution vs. occupancy.
Figure 10: Analysis of thermal morphology, compared with CO₂ levels and occupancy: (a) Daily evolution vs. occupancy; (b) Daily evolution vs. occupancy vs. CO₂ levels.
27 pages, 12188 KiB  
Article
A Bio-Inspired Visual Network That Fuses Motion and Contrast Features for Detecting Small Moving Objects in Dynamic Complex Environments
by Jun Ling, Hecheng Meng and Deming Gong
Appl. Sci. 2025, 15(3), 1649; https://doi.org/10.3390/app15031649 - 6 Feb 2025
Viewed by 514
Abstract
In complex and dynamic environments, traditional motion detection techniques that rely on visual feature extraction face significant challenges when detecting and tracking small moving objects. These difficulties stem primarily from the limited feature information inherent in small objects and the substantial interference caused by irrelevant information in complex backgrounds. Inspired by the intricate mechanisms for detecting small moving objects in insect brains, bio-inspired systems have been designed to identify small moving objects against dynamic natural backgrounds. While these insect-inspired systems can effectively exploit motion information for object detection, they remain limited in suppressing complex background interference and accurately segmenting small objects, leading to a high rate of false positives from the background in their detection results. To overcome this limitation, and again inspired by insect visual neural structures, we propose a novel dual-channel visual network. The network first uses a motion detection channel to extract the target's position and track its trajectory. Simultaneously, a contrast detection channel extracts the target's local contrast. Then, based on the target's motion trajectory, the network determines the temporal variation of the target's contrast. Finally, by comparing the temporal fluctuation characteristics of the contrast between the target and background false positives, the network can effectively distinguish the target from the background and thereby suppress false positives. The experimental results show that the visual network performs excellently in terms of detection rate and precision, with an average detection rate of 0.81 and an average precision of 0.0968, significantly better than those of the other methods compared. This indicates a significant advantage in suppressing false alarms and identifying small targets in complex dynamic environments.
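The core STMD correlation in the motion channel is easy to caricature in one dimension. The sketch below is a simplified, assumed rendering of the mechanism described (not the authors' network): the luminance change at a pixel is split into an ON signal (Tm3-like, immediate) and an OFF signal (Tm1-like, delayed), and their product peaks when a small dark target enters and leaves the pixel within the delay window.

```python
# Simplified single-pixel STMD-style correlation (illustrative, not the paper's full network).
import numpy as np

fs = 1000                            # samples per second
t = np.arange(0, 1, 1 / fs)
lum = np.ones_like(t)
lum[(t > 0.40) & (t < 0.44)] = 0.2   # a small dark target crosses the pixel

dlum = np.diff(lum, prepend=lum[0])  # LMC-like temporal luminance change
on = np.maximum(dlum, 0)             # Tm3-like ON signal (brightening at target exit)
off = np.maximum(-dlum, 0)           # Tm1-like OFF signal (darkening at target entry)

delay = int(0.04 * fs)               # delay matched to the target's dwell time
off_delayed = np.concatenate([np.zeros(delay), off[:-delay]])

stmd = on * off_delayed              # strong response only for small, brief targets
print("peak response at t =", t[np.argmax(stmd)], "s")
```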
Figures:
Figure 1: (a) When an object is located at a significant distance from the imaging device, it occupies just a single pixel or a small number of pixels and lacks distinct physical features. The white circle marks the small target (a flying plane), and the red circle in the top right corner shows an enlarged view of it. (b) Diagram of the neuron connections of the proposed visual network, which involves two parts: a motion information detection channel and a contrast information detection channel. External visual information is processed via ommatidia, LMC, Tm3, Tm1, and STMD neurons to detect small targets' motion in the motion channel, while the contrast information of the object is extracted by AMC and T1 neurons in the contrast channel. Integrating contrast and motion information in STMD neurons allows the detection of small objects' motion.
Figure 2: (a) The working mechanism of the motion information detection channel. When a small target passes through a pixel, the ommatidium output first decreases and then increases, eventually returning to its initial level; this change is caused by the entry and exit of the small target, respectively. The LMC cell perceives the luminance change and transmits the signal to Tm3 and Tm1 neurons. The Tm3 neuron responds immediately, while the Tm1 neuron has a delayed response. Finally, the STMD produces a strong response by multiplying the outputs of Tm3 and Tm1 neurons at the same time position. (b) The working mechanism of the contrast information detection channel. When the small target passes through a pixel, the AMC cells first compute the brightness of the target region and the surrounding background region, and then transmit this information to the T1 neurons to calculate the contrast of the target.
Figure 3: The working mechanism of the CISTMD model. The model first obtains the positions of all features by extracting motion information through the STMD visual neural pathway and obtains contrast information by calculating the contrast of all pixels. It then estimates the motion trajectories of possible targets from the position information and records contrast traces along each trajectory. Finally, it computes the variance of each contrast trace and uses this variance to distinguish the small object from false positives.
Figure 4: (a) CISTMD model working diagram: visual information is processed through a cascade of different types of neurons, ultimately leading to visual perception. (b) The left image shows the motion of a small object and a small-target-like object against a complicated background. The contrast between the small target and the background varies over time, while the small-target-like object, being hidden in the background, maintains an almost constant contrast. The upper right figure shows that the STMD neurons generate different contrast response curves for time-varying and time-fixed contrast features. The lower right figure shows the variances of the contrast responses: the variance curve of the time-varying contrast target is higher, and that of the time-fixed contrast target is lower, so the contrast response variance can serve as a criterion to distinguish the small target from false positives.
Figure 5: The upper figure depicts the process of extracting the motion trajectory of small features: the positions of small objects are recorded continuously, and when two consecutively detected positions are close enough, they are considered to belong to the same trajectory, allowing the movement path of an object to be tracked over time. The lower figure displays the complete trajectory extracted by the system.
Figure 6: (a) The spatial dimensions of the small target and its surrounding local neighborhood, where w and h denote the width and height of the target, respectively. (b–e) The STMD model's responsiveness to objects with assorted Weber contrasts, movement velocities, widths, and heights.
Figure 7: (a) A 300 × 250 pixel frame sampled from the synthetic dataset. A small, dark object (the target) moves in a direction opposite to the complex background; the target and background have constant velocities of 250 and 150 pixels per second, respectively. The markers V_T and V_B denote the directions of movement of the true object and the backdrop, respectively. (b) The trajectory of the small target over a duration of 800 ms.
Figure 8: (a) Input brightness profile. (b) Input brightness profile of the fake target. (c) Ommatidial neural response. (d) Ommatidial neural response of the fake target. (e) LMC neuron response. (f) LMC neuron response of the fake target.
Figure 9: (a) Tm3 neural response. (b) Tm3 neural response of the fake target (dotted line). (c) Tm1 neural response. (d) Tm1 neural response of the fake target (dotted line).
Figure 10: (a) STMD neural response. (b) STMD neural response of the fake target (dotted line).
Figure 11: (a,b) Contrast information of the small target and fake features at a specified time point, t₀ = 500 ms. (c,d) Temporal evolution of the contrast information of the small target and fake features over the interval [0, 800] ms.
Figure 12: (a–c) Motion trajectories tracked by the CISTMD model without the contrast information detection channel, under thresholds β of 10, 50, and 100, respectively. (d–f) Motion trajectories tracked by the CISTMD model with the contrast information detection channel, under thresholds β of 10, 50, and 100 and a fixed contrast-information standard-deviation threshold γ of 0.0025.
Figure 13: (a–c) Motion trajectories tracked by three different models: ESTMD, DSTMD, and FSTMD. (d–f) The trajectories sensed by the CISTMD approach across a range of contrast-information variance parameters γ: 0.001, 0.002, and 0.0025.
Figure 14: Contrast information variances over time. (a) The contrast information variances of the small target. (b) The contrast information variances of the fake feature. (c,d) Output signal after applying a contrast threshold.
Figure 15: (a) The receiver operating characteristic (ROC) curve of the CISTMD model on the initial test video sequence. (b–f) ROC curves for four different methods on the original sequence across multiple object sizes, brightness scales, motion velocities, and background motion velocities, at a constant false alarm rate of 5.
Figure 16: (a,c,e) A sample frame from each of three test video sequences, in which a small moving object is observed against a complex, real-world background; the arrows V_B and V_T indicate the directions of movement of the background and object, respectively. (a) Synthetic video one; (c) synthetic video two; (e) synthetic video three. (b,d,f) ROC curves illustrating the performance of four different methods on the three test video sequences.
Figure 17: Representative frames from three real-world datasets and ROC curves illustrating the performance of four different methods on them. (a) Real video one; (b) real video two; (c) real video three.
22 pages, 63900 KiB  
Article
Camera–LiDAR Wide Range Calibration in Traffic Surveillance Systems
by Byung-Jin Jang, Taek-Lim Kim and Tae-Hyoung Park
Sensors 2025, 25(3), 974; https://doi.org/10.3390/s25030974 - 6 Feb 2025
Viewed by 444
Abstract
In traffic surveillance systems, accurate camera–LiDAR calibration is critical for effective detection and robust environmental recognition. Because the sensors are positioned at significant distances to cover extensive areas and minimize blind spots, the calibration search space expands, increasing the complexity of the optimization process. This study proposes a novel target-less calibration method that leverages dynamic objects, specifically moving vehicles, to constrain the calibration search range and enhance accuracy. To address the challenges of the expanded search space, we employ a genetic-algorithm-based optimization technique, which reduces the risk of converging to local optima. Experimental results on both the TUM public dataset and our proprietary dataset indicate that the proposed method achieves high calibration accuracy and is particularly suitable for traffic surveillance applications requiring wide-area calibration. This approach holds promise for enhancing sensor fusion accuracy in complex surveillance environments.
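A bare-bones genetic algorithm over the six extrinsic parameters conveys the optimization idea. The sketch below is an assumed illustration; its fitness function is only a placeholder for the paper's alignment cost over dynamic-object features, and it evolves a population of (roll, pitch, yaw, tx, ty, tz) candidates through selection, crossover, and mutation.

```python
# Minimal GA over 6-DoF extrinsics (illustrative; fitness is a stand-in for the
# camera-LiDAR alignment cost computed from projected dynamic-object features).
import numpy as np

rng = np.random.default_rng(0)
TRUE = np.array([0.02, -0.01, 0.03, 0.5, -0.2, 1.0])  # hypothetical ground truth

def fitness(pop):
    """Placeholder cost: negative distance to TRUE. Replace with projection misalignment."""
    return -np.linalg.norm(pop - TRUE, axis=1)

pop = rng.uniform(-1, 1, size=(64, 6))         # initial candidates within the search range
for gen in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-32:]]         # keep the fitter half (elitist selection)
    kids = []
    for _ in range(32):
        a, b = parents[rng.integers(32, size=2)]
        mask = rng.random(6) < 0.5             # uniform crossover per parameter
        child = np.where(mask, a, b) + rng.normal(0, 0.01, 6)  # Gaussian mutation
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmax(fitness(pop))]
print(np.round(best, 3))  # should approach TRUE
```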
(This article belongs to the Section Vehicular Sensing)
Figures:
Figure 1: (a) Example of camera and LiDAR sensor placement on a vehicle. (b) Example of camera and LiDAR sensor placement at an intersection.
Figure 2: Proposed calibration system configuration diagram and feature extraction results.
Figure 3: Flowchart of LiDAR feature extraction.
Figure 4: Flowchart of camera feature extraction: background subtraction, dynamic object detection, morphology operation, and Canny edge extraction.
Figure 5: Regeneration process of the genetic algorithm.
Figure 6: Comparison of accuracy by search range with existing edge-based and semantic-based methods. (a) Calibration RMSE median and deviation within [1°, 0.1 m]; (b) RMSE median and deviation within [5°, 0.5 m]; (c) RMSE median and deviation within [10°, 1.0 m].
Figure 7: Qualitative calibration results of the proposed method on the TUM dataset. The first and second columns show the results before calibration; the third and fourth columns show the results after calibration.
Figure 8: Comparison of RMSE values for rotation (roll, pitch, and yaw) and translation (tx, ty, and tz) parameters obtained from different feature and optimization methods (static grid search, static GA, dynamic grid search, and dynamic GA) using the TUM dataset. The box plots illustrate the distribution of RMSE values, emphasizing the calibration accuracy of each method.
Figure 9: Qualitative calibration results of the proposed method on our own dataset. The first and second columns show the results before calibration; the third and fourth columns show the results after calibration.
26 pages, 4698 KiB  
Article
Estimating Motion Parameters of Ground Moving Targets from Dual-Channel SAR Systems
by Kun Liu, Xiongpeng He, Guisheng Liao, Shengqi Zhu and Cao Zeng
Remote Sens. 2025, 17(3), 555; https://doi.org/10.3390/rs17030555 - 6 Feb 2025
Viewed by 433
Abstract
In dual-channel synthetic aperture radar (SAR) systems, estimating the four-dimensional motion parameters of a ground maneuvering target is a critical challenge. In particular, when spatial degrees of freedom are used to enhance the target's output signal-to-clutter-plus-noise ratio (SCNR), the parameter estimation can admit multiple solutions. To deal with this issue, a novel algorithm for estimating the motion parameters of ground moving targets in dual-channel SAR systems is proposed in this paper. First, random sample consensus (RANSAC) and modified adaptive 2D calibration (MA2DC) are used to prevent the target's phase from being distorted by channel balancing. To address range migration, the RFRT algorithm is introduced to achieve arbitrary-order range migration correction for moving targets, and the generalized scaled Fourier transform (GSCFT) algorithm is applied to estimate the polynomial coefficients of the target. Subsequently, we propose using the synthetic aperture length (SAL) of the target as an independent equation to solve for the four-dimensional parameter information and introduce a windowed maximum-SNR method to estimate the SAL. Finally, a closed-form solution for the four-dimensional parameters of ground maneuvering targets is derived. Simulations and real data validate the effectiveness of the proposed algorithm.
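The RANSAC-plus-fitting step can be illustrated generically. The sketch below is an assumed example (not the paper's MA2DC implementation): it robustly fits a polynomial to noisy phase samples contaminated by outliers, then refits on the consensus set, which is the same protection against distorted phase fits that the abstract describes.

```python
# Generic RANSAC polynomial fit for phase samples with outliers (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def ransac_polyfit(t, phase, deg=2, iters=200, tol=0.1):
    """Fit phase(t) ~ poly(t) robustly; returns coefficients of the consensus model."""
    best_inliers = np.zeros(len(t), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(t), size=deg + 1, replace=False)   # minimal sample
        coeffs = np.polyfit(t[idx], phase[idx], deg)
        resid = np.abs(np.polyval(coeffs, t) - phase)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return np.polyfit(t[best_inliers], phase[best_inliers], deg)  # refit on inliers

t = np.linspace(0, 1, 200)
phase = 0.3 + 2.0 * t + 5.0 * t**2 + 0.02 * rng.normal(size=t.size)
phase[rng.choice(t.size, 20, replace=False)] += rng.normal(0, 2, 20)  # outliers
print(np.round(ransac_polyfit(t, phase), 2))  # approximately [5.0, 2.0, 0.3]
```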
(This article belongs to the Special Issue Advanced Techniques of Spaceborne Surveillance Radar)
Figures:
Figure 1: Geometry relationship between a ground moving target and the SAR platform.
Figure 2: The phase fitting results after the fifth iteration. (a) MA2DC. (b) MA2DC + RANSAC.
Figure 3: Comparison of the signal-to-noise ratio (SNR) after applying RFRT using the bandwidth and the sampling frequency, respectively.
Figure 4: The relationship between the search window length and the target's SAL.
Figure 5: Multiple-target processing results. (a) The trajectories of the targets. (b) The results of the FFT along fast time. (c) RCMC by RFRT. (d) Cross interference of RFRT. (e) The results of GSCFT.
Figure 6: The SAL estimation and imaging results for Tar1. (a) Start time search result. (b) End time search result. (c) Ambiguity velocity search result. (d) Focusing result of Tar1.
Figure 7: The SAL estimation and imaging results for Tar2. (a) Start time search result. (b) End time search result. (c) Ambiguity velocity search result. (d) Focusing result of Tar2.
Figure 8: The estimation performance for the motion parameters. (a) Cross-track velocity. (b) Cross-track acceleration. (c) Along-track velocity. (d) Along-track acceleration.
Figure 9: The interference phase results after processing by different channel balancing algorithms. (a) MA2DC. (b) RANSAC + MA2DC.
Figure 10: Parameter estimation results of the proposed algorithm in a dual-channel airborne SAR system. (a) Two-dimensional time domain result before clutter suppression. (b) Target motion trajectory after clutter rejection. (c) RCMC result by RFRT. (d) GSCFT result. (e) RCMC results after the compensation function. (f) Final focusing result by the proposed method. (g) Moving target relocation result after the proposed method. (h) Moving target relocation result after MA2DC. (The red diamond marks the detection result of the moving target; the green circle marks the relocation result.)
Figure 11: Focusing results of different algorithms in the Ku band.
Figure 12: SAL results for the ground moving target. (a) Start time search result. (b) Start time search local result. (c) End time search result. (d) End time search local result.
Figure 13: Parameter estimation results for two ground moving targets in a real airborne SAR system. (a) Range–Doppler domain results before clutter suppression. (b) Range–Doppler domain result after clutter suppression. (c) RCMC result of target 1 by RFRT. (d) RCMC result of target 2 by RFRT. (e) GSCFT result of target 1. (f) GSCFT result of target 2. (g) The compensation result of target 1 after Equation (30). (h) The compensation result of target 2 after Equation (30). (i) Final focusing result of target 1 by the proposed method. (j) Final focusing result of target 2 by the proposed method. (k) Ambiguity search result of the target's radial velocity. (l) Relocation result of the targets. (The red diamond marks the detection result of the moving target; the green circle marks the relocation result.)
Figure 14: Focusing results of different algorithms in the X band. (a) Target 1. (b) Target 2.
19 pages, 4007 KiB  
Article
Collaborative Control of UAV Swarms for Target Capture Based on Intelligent Control Theory
by Yuan Chi, Yijie Dong, Lei Zhang, Zhenyue Qiu, Xiaoyuan Zheng and Zequn Li
Mathematics 2025, 13(3), 413; https://doi.org/10.3390/math13030413 - 26 Jan 2025
Viewed by 681
Abstract
Real-time dynamic capture of a single moving target is one of the most crucial and representative tasks in UAV capture problems. This paper proposes a multi-UAV real-time dynamic capture strategy based on a differential game model to address this challenge. The dynamic capture problem is divided into two parts: pursuit and capture. First, in the pursuit–evasion problem based on differential games, the capture UAVs and the target UAV are treated as adversarial parties engaged in a game. The current pursuit–evasion state is modeled and analyzed according to varying environmental information, allowing the capture UAVs to track the target UAV quickly; the Nash equilibrium solution of the differential game is optimal for both parties in the pursuit–evasion process. Then, a collaborative multi-UAV closed circular pipeline control method is proposed to ensure an even distribution of capture UAVs around the target, preventing excessive clustering and thereby significantly improving capture efficiency. Finally, simulations and real-flight experiments are conducted on the RflySim platform in typical scenarios to analyze the computational process and verify the effectiveness of the proposed method. The results indicate that this approach provides an effective solution for multi-UAV dynamic capture and achieves the desired capture outcomes.
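A toy simulation conveys the two phases. The sketch below is an assumed simplification, not the paper's game-theoretic controller: each pursuer chases the target directly until within the capture radius, then switches to an assigned, evenly spaced slot on a circle around the target, which mimics the even distribution enforced by the closed circular pipeline. All radii and speeds are placeholders.

```python
# Toy two-phase capture: pure pursuit, then evenly spaced circular containment.
# Illustrative stand-in for the differential-game controller in the paper.
import numpy as np

N, dt = 4, 0.05
capture_radius, ring_radius = 3.0, 1.0
pursuers = np.random.default_rng(2).uniform(-5, 5, (N, 2))
target = np.array([0.0, 0.0])

for step in range(400):
    target = target + dt * np.array([0.4, 0.3])        # target: constant-velocity evader
    for i in range(N):
        if np.linalg.norm(target - pursuers[i]) > capture_radius:
            goal = target                              # phase 1: pursue the target
        else:
            ang = 2 * np.pi * i / N                    # phase 2: evenly spaced slot
            goal = target + ring_radius * np.array([np.cos(ang), np.sin(ang)])
        d = goal - pursuers[i]
        dist = np.linalg.norm(d)
        if dist > 1e-6:
            pursuers[i] += dt * 1.2 * d / dist         # pursuers slightly faster

print(np.round(np.linalg.norm(pursuers - target, axis=1), 2))  # each near ring_radius
```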
Figures:
Figure 1: Flowchart of the UAV target capture algorithm based on the differential game.
Figure 2: Relative positioning of the capture UAV and target UAV.
Figure 3: Positional relationship of each radius of the UAV and the closed circular pipeline. (a) Relative positions among the UAV radii. (b) Relative concepts of the closed circular pipeline.
Figure 4: Real-time dynamic target point distribution of the pursuit UAVs and evader UAV based on the differential game algorithm.
Figure 5: Location of the capture points.
Figure 6: Physical schematic connection diagram.
Figure 7: UAV capture results.
Figure 8: Snapshots of real flight and RflySim simulation of UAV capture. (a) Initial positions of the UAVs; (b) target search by capture UAVs; (c) UAV pursuit based on the differential game; (d) capture initiates when the distance between the capture UAVs and the target UAV falls below the capture radius; (e) dynamic capture based on the closed circular pipeline; (f) capture successfully completed.
Figure 9: Distance between the capture UAVs and the target UAV.
Figure 10: Distance between the fifth target UAV and the first capture UAV.
14 pages, 2463 KiB  
Systematic Review
Wildlife Fences to Mitigate Human–Wildlife Conflicts in Africa: A Literature Analysis
by Jocelyn Weyala Burudi, Eszter Tormáné Kovács and Krisztián Katona
Diversity 2025, 17(2), 87; https://doi.org/10.3390/d17020087 - 25 Jan 2025
Viewed by 640
Abstract
The deployment of wildlife fences in Africa serves as a crucial intervention to balance wildlife conservation with human safety and agricultural productivity. This review synthesizes current research and case studies to provide a comprehensive understanding of the implications, benefits, and drawbacks of wildlife fencing in Africa. Information was drawn from 54 articles selected through a thorough search of the Web of Science and Scopus databases. The results indicate that the primary reason for fencing was the mitigation of human–wildlife conflicts. Electric fences were the most commonly mentioned type, prominently used to protect agricultural land from crop-raiding species; the prevention of livestock depredation and disease transmission was also an important driver for fencing. Elephants were the most studied species with respect to wildlife fencing, and they caused the most damage to fences, creating pathways for other species to move beyond protected areas. Antelopes and large carnivores were also common targets of wildlife fences. Fences were found to be effective mainly against crop raiding, particularly when well maintained through frequent inspections for damage and permeability. Several authors documented challenges in fencing against primates, burrowers, and high-jumping species like leopards. The cost of fences varied with materials, design, and maintenance, significantly affecting local communities near conservation areas. Despite their benefits, wildlife fences posed ecological challenges, such as habitat fragmentation and restricted animal movement, necessitating integrated management approaches that include wildlife corridors and crossing structures. This review provides insights for policymakers and conservationists seeking to optimize the use of fences in the diverse environmental contexts of the African continent.
(This article belongs to the Special Issue Human Wildlife Conflict across Landscapes—Second Edition)
Figure 1. PRISMA flow diagram of the systematic review.
Figure 2. Number of publications per year (n = 54) in Africa focusing on wildlife fences.
Figure 3. The distribution of research articles (n = 54) by countries that reported the use of wildlife fences as a mitigation method for human–wildlife conflicts.
Figure 4. Frequency of the reasons for wildlife fencing in Africa based on the number of publications (n = 54). In some articles multiple reasons for fencing were mentioned. HWC: Human–Wildlife Conflict.
Figure 5. Frequency of utilization of the different types of fences in the mitigation of human–wildlife conflicts in Africa based on the number of publications (n = 54). Different colours distinguish the main varieties of fences. Bars representing electric fence types are marked in blue. In some articles, multiple types of fences were mentioned.
Figure 6. Effectiveness of fences in mitigating different types of conflicts related to wildlife in Africa based on the reviewed publications (n = 54) according to their conclusions.
Figure 7. The frequency of damage to fences used for mitigating human–wildlife conflicts in Africa due to different factors based on the number of publications (n = 54).
17 pages, 4616 KiB  
Article
All-Metal Metamaterial-Based Sensor with Novel Geometry and Enhanced Sensing Capability at Terahertz Frequency
by Sagnik Banerjee, Ishani Ghosh, Carlo Santini, Fabio Mangini, Rocco Citroni and Fabrizio Frezza
Sensors 2025, 25(2), 507; https://doi.org/10.3390/s25020507 - 16 Jan 2025
Viewed by 706
Abstract
This research proposes an all-metal metamaterial-based absorber with a novel geometry capable of refractive index sensing in the terahertz (THz) range. The structure consists of four concentric diamond-shaped gold resonators on top of a gold metal plate; the resonators increase in height by 2 µm moving from the outer to the inner resonators, making the design distinctive. This novel configuration plays a significant role in achieving multiple ultra-narrow resonant absorption peaks that produce very high sensitivity when the structure is employed as a refractive index sensor. Numerical simulations demonstrate that it can achieve six significant ultra-narrow absorption peaks within the frequency range of 5 to 8 THz. The sensor has a maximum absorptivity of 99.98% at 6.97 THz. The proposed absorber also produces very high quality factors at each resonance. The average sensitivity is 7.57 THz per refractive index unit (THz/RIU), which is significantly higher than the current state of the art. This high sensitivity is instrumental in detecting small traces of samples whose refractive indices are closely spaced, such as several harmful gases. Hence, the proposed metamaterial-based sensor can be used as a potential gas detector at terahertz frequencies. Furthermore, the structure proves to be polarization-insensitive and produces a stable absorption response when the angle of incidence is increased up to 60°. At terahertz wavelengths, the proposed design can be used for any value of the aforementioned angles, targeting THz spectroscopy-based biomolecular fingerprint detection and energy harvesting applications.
(This article belongs to the Special Issue Recent Advances in THz Sensing and Imaging)
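The figures of merit quoted in this abstract follow from standard definitions: sensitivity S = Δf/Δn (THz/RIU) and quality factor Q = f₀/FWHM. A minimal Python sketch of the arithmetic; the peak shift and linewidth below are illustrative stand-ins, not values from the paper's tables:

```python
def sensitivity_thz_per_riu(f_at_n1, f_at_n2, n1, n2):
    """Sensitivity S = |Δf / Δn| of a resonance peak, in THz/RIU
    (sensitivity is conventionally reported as a magnitude)."""
    return abs((f_at_n2 - f_at_n1) / (n2 - n1))

def quality_factor(f0, fwhm):
    """Quality factor Q = f0 / FWHM of an ultra-narrow peak."""
    return f0 / fwhm

# Illustrative values only: a peak moving from 5.972 THz at n = 1.00
# to 5.592 THz at n = 1.05, with an assumed 0.01 THz linewidth.
S = sensitivity_thz_per_riu(5.972, 5.592, 1.00, 1.05)
Q = quality_factor(5.972, 0.01)
print(f"S = {S:.2f} THz/RIU, Q = {Q:.0f}")  # S = 7.60 THz/RIU, Q = 597
```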
Figure 1. (a) Front view of the proposed design. (b) Side view of the proposed design. Geometrical dimensions of the structure are u = 86 µm, r = 40 µm, r1 = 30 µm, r2 = 20 µm, and r3 = 10 µm; a = 2 µm, b = 6 µm, and t = 2 µm.
Figure 2. Absorption spectra of the proposed structure with hexaband configuration.
Figure 3. Side view of a conventional sensor [31,32].
Figure 4. Comparison between the absorption plot of our proposed design and the conventional design (depicted in Figure 3) [31,32].
Figure 5. Results of parametric analysis plotting values of absorption A as a function of frequency for different values of unit cell dimensions u [µm] (a), ground plate thickness t [µm] (b), and height of the largest ring b [µm] (c).
Figure 6. Plots of simulated absorption spectra for different values of the polarization angle (ϕ) [deg].
Figure 7. Plots of simulated absorption spectra for different values of incidence angles (θ) [deg].
Figure 8. Real (blue solid line) and imaginary (red solid line) parts of the simulated impedance of the structure are plotted as a function of frequency.
Figure 9. Simulated effective permittivity [F/m] (blue) and permeability [A/m] (red) of the structure are plotted as a function of frequency. The real and the imaginary parts are depicted in solid and dashed lines, respectively.
Figure 10. Simulated surface current distribution 2D map at the resonant frequency of (a) 5.972 THz, (b) 6.272 THz, (c) 6.977 THz, (d) 7.067 THz, (e) 7.715 THz, and (f) 7.934 THz.
Figure 11. Shift in the absorption peaks in the absorption spectrum of the structure when the refractive index increases from 1 to 1.05.
Figure 12. Scatter plots of resonance frequency with respect to values of surrounding medium refractive index in the range from 1 to 1.05 with a step width of 0.01 for each absorption peak; (a) 1st peak = 5.972 THz, (b) 2nd peak = 6.272 THz, (c) 3rd peak = 6.977 THz, (d) 4th peak = 7.067 THz, (e) 5th peak = 7.715 THz, and (f) 6th peak = 7.934 THz.
28 pages, 56964 KiB  
Article
Sequential Multimodal Underwater Single-Photon Lidar Adaptive Target Reconstruction Algorithm Based on Spatiotemporal Sequence Fusion
by Tian Rong, Yuhang Wang, Qiguang Zhu, Chenxu Wang, Yanchao Zhang, Jianfeng Li, Zhiquan Zhou and Qinghua Luo
Remote Sens. 2025, 17(2), 295; https://doi.org/10.3390/rs17020295 - 15 Jan 2025
Viewed by 610
Abstract
To meet the demand for long-range, high-resolution target reconstruction of small, slow-moving underwater targets, research on single-photon lidar target reconstruction technology is being carried out. This paper reports a sequential multimodal underwater single-photon lidar adaptive target reconstruction algorithm based on spatiotemporal sequence fusion, which has strong information extraction and noise filtering ability and can reconstruct target depth and reflective intensity information from complex echo photon time counts and spatial pixel relationships. The method consists of three steps: data preprocessing, sequence-optimized extreme value inference filtering, and a collaborative variation strategy for image optimization, achieving high-quality target reconstruction in complex underwater environments. Simulation and test results show that the target reconstruction method outperforms current imaging algorithms, and the single-photon lidar system built for this work achieves underwater lateral and distance resolutions of 5 mm and 2.5 cm at 6 attenuation lengths (6 AL), respectively. This indicates that the method has a great advantage in sparse photon-counting imaging and is capable of underwater target imaging against a strong-light noise background. It also provides a good solution for long-range, high-resolution underwater imaging of small, slow-moving targets.
(This article belongs to the Special Issue Ocean Remote Sensing Based on Radar, Sonar and Optical Techniques)
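For context on the reconstruction problem this abstract describes: single-photon depth imaging generally starts from a per-pixel histogram of photon arrival times, converts the peak bin to range via the two-way time of flight, and uses spatial neighbours to suppress background counts. The sketch below implements only this generic baseline (peak pick plus spatial median filter), not the authors' SUARF algorithm; the water light speed and bin width are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

C_WATER = 2.25e8  # approximate speed of light in water, m/s

def depth_from_histograms(hists, bin_width_s):
    """hists: (H, W, T) array of photon counts per time bin per pixel.
    Pick the peak bin for each pixel, convert two-way time of flight
    to range, then median-filter over a 3x3 spatial neighbourhood to
    suppress pixels whose 'peak' was only background noise."""
    peak_bin = hists.argmax(axis=-1)                 # (H, W)
    depth = peak_bin * bin_width_s * C_WATER / 2.0   # two-way travel
    return median_filter(depth, size=3)
```

At a 100 ps bin width this baseline resolves roughly 1.1 cm of range in water, the same order of magnitude as the centimetre-level distance resolution reported above.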
Figure 1. SUARF target reconstruction algorithm flow diagram.
Figure 2. Schematic of data preprocessing of SUARF algorithm.
Figure 3. Schematic diagram of the sequence optimized maximum value inference filtering.
Figure 4. Schematic diagram of the system principle.
Figure 5. Schematic diagram of the target.
Figure 6. The reconstruction results of the SUARF algorithm on target A when the external light intensity is 0.1/2374/5450/44355 Lux.
Figure 7. The reconstruction results of the SUARF algorithm on target B when the external light intensity is 0.1/1102/16172/44364 Lux.
Figure 8. The reconstruction results of the SUARF algorithm for target A when the imaging distance is 10/25/40/55 m.
Figure 9. The reconstruction results of the SUARF algorithm for target B when the imaging distance is 10/25/40/55 m.
Figure 10. The reconstruction results of the SUARF algorithm for target A when the number of single pixel pulses is 5/50/100/500.
Figure 11. The reconstruction results of the SUARF algorithm for target B when the number of single pixel pulses is 5/50/100/500.
Figure 12. Schematic diagram of pool experiment and test target.
Figure 13. Results of pool target reconstruction for SUARF and STMF algorithms.
Figure 14. Results of pool target reconstruction for peak (a1,a2), cross-correlation (b1,b2), first photon (c1,c2), and first photon group (d1,d2) algorithms.
Figure 15. Hollow target board and underwater visible light camera close shot.
Figure 16. Imaging results of the SUARF target reconstruction algorithm for the hollow target plate. (a) Depth map, 32 pixels, 10 pulses. (b) Reflection intensity map, 32 pixels, 10 pulses. (c) Depth map, 64 pixels, 10 pulses. (d) Reflection intensity map, 64 pixels, 10 pulses. (e) Depth map, 128 pixels, 20 pulses. (f) Reflection intensity map, 128 pixels, 20 pulses.
Figure 17. Experimental layout of SUARF target reconstruction algorithm in different waters.
Figure 18. Targets used in pipeline test. (a) Metal cube. (b) Band target A. (c) Spoke-type target.
Figure 19. Targets used in swimming pool test. (a) Metal cube. (b) Diagonal ladder distribution of metal blocks. (c) Spoke-type target. (d) Band target B. (e) Band target A.
Figure 20. Targets used in the actual water off the coast of China test. (a) Metal cube. (b) Spoke-type target.
Figure 21. Results of the target reconstruction in the pipeline.
Figure 22. Imaging results under the inappropriate threshold of the first photon group algorithm.
Figure 23. Image result of first photon group algorithm in the swimming pool.
Figure 24. Image result of STMF algorithm in the swimming pool.
Figure 25. Image result of SUARF algorithm in the swimming pool.
Figure 26. Imaging results of the first photon group algorithm in actual water off the coast of China.
Figure 27. Imaging results of the STMF algorithm in actual water off the coast of China.
Figure 28. Imaging results of the SUARF algorithm in actual water off the coast of China.
20 pages, 13199 KiB  
Article
Peripherally Restricted Activation of Opioid Receptors Influences Anxiety-Related Behaviour and Alters Brain Gene Expression in a Sex-Specific Manner
by Nabil Parkar, Wayne Young, Trent Olson, Charlotte Hurst, Patrick Janssen, Nick J. Spencer, Warren C. McNabb and Julie E. Dalziel
Int. J. Mol. Sci. 2024, 25(23), 13183; https://doi.org/10.3390/ijms252313183 - 7 Dec 2024
Viewed by 1208
Abstract
Although effects of stress-induced anxiety on the gastrointestinal tract and enteric nervous system (ENS) are well studied, how ENS dysfunction impacts behaviour is not well understood. We investigated whether ENS modulation alters anxiety-related behaviour in rats. We used loperamide, a potent μ-opioid receptor agonist that does not cross the blood–brain barrier, to manipulate ENS function and assess changes in behaviour, gut and brain gene expression, and microbiota profile. Sprague Dawley (male/female) rats were acutely dosed with loperamide (subcutaneous) or control solution, and their behavioural phenotype was examined using open field and elevated plus maze tests. Gene expression in the proximal colon, prefrontal cortex, hippocampus, and amygdala was assessed by RNA-seq and caecal microbiota composition determined by shotgun metagenome sequencing. In female rats, loperamide treatment decreased distance moved and frequency of supported rearing, indicating decreased exploratory behaviour and increased anxiety, which was associated with altered hippocampal gene expression. Loperamide altered proximal colon gene expression and microbiome composition in both male and female rats. Our results demonstrate the importance of the ENS for communication between gut and brain for normo-anxious states in female rats and implicate corticotropin-releasing hormone and gamma-aminobutyric acid gene signalling pathways in the hippocampus. This study also sheds light on sexually dimorphic communication between the gut and the brain. Microbiome and colonic gene expression changes likely reflect localised effects of loperamide related to gut dysmotility. These results suggest possible ENS pharmacological targets to alter gut to brain signalling for modulating mood.
(This article belongs to the Special Issue Interactions between the Nervous System and Gastrointestinal Motility)
Figure 1. Study design: (a) Rats were acclimatised to their new living environment for one week, after which they were handled for one week; (b) On the day of the behaviour tests, rats were administered loperamide or DMSO (control) two hours prior to the start of the behaviour testing (OF, EPM); (c) Rats were re-administered loperamide or DMSO (control) the next day, two hours prior to sampling. Created in BioRender (2024), BioRender.com/x26y637.
Figure 2. Open field test: (a) Distance moved; (b) Velocity of tracked movement; (c) Rearing frequency; (d) Coloured concentric circles represent different zones in the arena (red represents center or 25% zone; grey represents periphery or 100% zone); (e) Time spent in 25% or center zone of OF arena; (f) Time spent in the 100% zone or periphery of the OF arena. Asterisks indicate statistical significance (* p < 0.05; ** p < 0.01). Data shown as mean with error bars indicating SEM, n = 7–8 animals per treatment group.
Figure 3. Elevated plus maze: (a) Distance moved; (b) Velocity of tracked movement; (c) Percent entries in open arms of the EPM; (d) Diagram of the EPM; (e) Graph showing % time spent in open arms of the EPM; (f) Percent time spent in center zone of the EPM. Asterisks indicate statistical significance (* p < 0.05; ** p < 0.01). Data shown as mean with error bars indicating SEM, n = 7–8 animals per treatment group.
Figure 4. (a) Heatmap showing differentially expressed genes in the hippocampus of loperamide-treated (n = 7) and control (n = 8) female rats. The red and blue colour scale represents expression, with red being higher and blue being lower. The values are scaled by row: the actual expression (counts) has been converted to standard deviations above and below the median, which is set at zero. (b) Reactome pathways differentially expressed by gene set enrichment analysis (p < 0.05) in amygdala, hippocampus, and prefrontal cortex of female rats. Red circles indicate overall significantly higher expression in loperamide rats compared to controls and blue circles indicate overall significantly lower expression compared to controls. Grey circles indicate pathways not differentially expressed (p > 0.05). The size of circles is proportional to the number of up- or downregulated genes.
Figure 5. (a) Volcano plot of control versus loperamide-treated groups for differentially expressed genes in the proximal colon. Heatmaps showing the top 40 differentially expressed genes in the proximal colon of (b) female (n = 7) and (c) male rats (n = 8). The red and blue colour scale represents expression, with red being higher and blue being lower. The values are scaled by row as in Figure 4.
Figure 6. (a) Taxonomic composition of the caecal microbiota at the family level. The x axis represents treatment, and the y axis represents relative abundance in percent. Low-abundance groups are the sum of all taxa outside of the 20 most abundant families. (b) Principal coordinate analysis (PCoA) plot of weighted UniFrac phylogenetic distances of caecal microbiotas from control (yellow) or loperamide (blue) groups, n = 8 males (squares) and 8 females (circles) per treatment group (PC1 vs. PC2). Percentages on axes indicate the proportion of variation explained by each dimension. Permutation analysis of variance indicated a significant difference between loperamide and control communities (ANOSIM p value = 0.001, R statistic = 0.449); the ellipse depicts the 75% confidence interval. (c) Box plots showing Bacteroides to be more abundant in loperamide-treated male and female rats compared to controls (median with 95% confidence intervals).
Figure 7. Correlation between time spent in the closed arm of the EPM and other variables identified by the sPLS-DA algorithm. Variables are displayed in the circle grouped by type: PFC, prefrontal cortex; HIP, hippocampus; AMY, amygdala; GUT, proximal colon; TAXA, caecal microbiome taxa; KEGG, microbiome genes/KEGG orthologues; EPM, parameters from the elevated plus maze; OFT, parameters from the open field test. Variables with a correlation score > 0.75 are joined by an orange line and variables with a correlation score < −0.75 are joined by a blue line.
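">
The row scaling described in the heatmap captions above (expression counts converted to standard deviations above and below the row median, with the median set at zero) is a small, reproducible transform. A minimal Python sketch, assuming a genes-by-samples count matrix; the toy matrix is illustrative only:

```python
import numpy as np

def row_scale(counts):
    """Scale each row (gene) of a genes-by-samples matrix to
    deviations from the row median, in units of the row's standard
    deviation, so the median of every row maps to zero."""
    counts = np.asarray(counts, dtype=float)
    med = np.median(counts, axis=1, keepdims=True)
    std = counts.std(axis=1, keepdims=True)
    return (counts - med) / np.where(std == 0, 1.0, std)  # guard zero std

# Toy 2-gene x 4-sample matrix (illustrative only).
print(row_scale([[10, 12, 30, 8], [5, 5, 6, 4]]))
```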
19 pages, 6033 KiB  
Article
Anti-Epileptic Activity of Mitocurcumin in a Zebrafish–Pentylenetetrazole (PTZ) Epilepsy Model
by Alin Dumitru Ciubotaru, Carmen-Ecaterina Leferman, Bogdan-Emilian Ignat, Anton Knieling, Delia Lidia Salaru, Dana Mihaela Turliuc, Liliana Georgeta Foia, Lorena Dima, Bogdan Minea, Luminita Diana Hritcu, Bogdan Ionel Cioroiu, Laura Stoica, Ioan-Adrian Ciureanu, Alin Stelian Ciobica, Bogdan Alexandru Stoica and Cristina Mihaela Ghiciuc
Pharmaceuticals 2024, 17(12), 1611; https://doi.org/10.3390/ph17121611 - 29 Nov 2024
Cited by 1 | Viewed by 1078
Abstract
Background/Objectives: Ongoing challenges in epilepsy therapy warrant research on alternative treatments that offer improved efficacy and reduced side effects. Designed to enhance mitochondrial targeting and increase bioavailability, mitocurcumin (MitoCur) was evaluated for the first time as an antiepileptic agent, with curcumin (Cur) and sodium valproate (VPA), a standard antiepileptic drug, included for comparison. This study investigated the effects on seizure onset, severity, and progression in a zebrafish model of pentylenetetrazole (PTZ)-induced seizures and measured the concentrations of the compounds in brain tissue. Methods: Zebrafish were pre-treated with MitoCur and Cur (both at 0.25 and 0.5 µM doses) and VPA (0.25 and 0.5 mM) and observed for four minutes to establish baseline locomotor behavior. Subsequently, the animals were exposed to a 5 mM PTZ solution for 10 min, during which seizure progression was observed and scored as follows: 1 = increased swimming; 2 = burst swimming, left and right movements; 3 = circular movements; 4 = clonic seizure-like behavior; 5 = loss of body posture. The studied compounds were quantified in brain tissue through HPLC and LC-MS. Results: Compared to the control group, all treatments reduced the distance moved and the average velocity, without significant differences between compounds or doses. During PTZ exposure, seizure latencies revealed that all treatments effectively delayed seizure onset up to score 4, demonstrating efficacy in managing moderate seizure activity. Notably, MitoCur also provided significant protection against the most severe seizure score (score 5). Brain tissue uptake analysis indicated that MitoCur achieved higher concentrations in the brain compared to Cur, at both doses. Conclusions: These results highlight the potential of MitoCur as a candidate for seizure management.
(This article belongs to the Special Issue Targeted Therapies for Epilepsy)
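The latency analysis summarized here (see Figure 3 of this entry) treats "time to first reach a given seizure score" as a survival outcome, with fish that never reach the score during the 10-min PTZ exposure right-censored. A minimal product-limit (Kaplan–Meier) estimator as a Python sketch; the latencies below are hypothetical, not the study's data:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times:  latency (s) to reach a given seizure score, or end of
            observation if the score was never reached;
    events: 1 if the score was reached, 0 if right-censored.
    Returns (event_times, survival_probabilities)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv = len(times), 1.0
    out_t, out_s = [], []
    for i in order:
        if events[i]:  # survival drops only at observed events
            surv *= (at_risk - 1) / at_risk
            out_t.append(times[i])
            out_s.append(surv)
        at_risk -= 1   # censored fish leave the risk set silently
    return out_t, out_s

# Hypothetical latencies (s) for 6 fish; two never reached the score
# within the 600 s exposure (censored).
t, s = kaplan_meier([95, 140, 150, 210, 600, 600], [1, 1, 1, 1, 0, 0])
print(list(zip(t, s)))  # [(95, 0.83...), (140, 0.67...), (150, 0.5), (210, 0.33...)]
```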
Figure 1. The effects of antiepileptic treatments on the locomotor and exploratory behavior of the zebrafish over a 4-min observation period, prior to seizure-inducing agent exposure. Violin plots illustrate the distribution of the values, the median (continuous line), and the interquartile range (dotted lines): (a) Total distance moved (cm); (b) Average velocity (cm/s). Statistical significance is indicated as ** p < 0.01, *** p < 0.001 (ordinary one-way ANOVA + Tukey's multiple comparisons test). The p values were adjusted to account for multiple comparisons.
Figure 2. Average final seizure scores after 10 min (mean with 95% confidence interval). MitoCur at 0.5 μM significantly reduced seizure scores compared to the control group (p = 0.0025). Statistical significance is indicated as ** p < 0.01 (Kruskal–Wallis test + Dunn's multiple comparisons test). The p values were adjusted to account for multiple comparisons.
Figure 3. Kaplan–Meier survival curves illustrating the probability of zebrafish not reaching different seizure scores after anti-epileptic treatment followed by PTZ exposure: (a)–(e) probability of not reaching scores 1–5, respectively. The Gehan–Breslow–Wilcoxon test was used to compare the curves. To compare curves two-by-two while accounting for multiple comparisons, the new significance threshold for the p values (p < 0.0024) was calculated using the Bonferroni method.
Figure 4. Chromatograms: (a) Heptanoic acid (λ = 200 nm) (Rt = 5.234), Sodium valproate (λ = 200 nm) (Rt = 5.585); (b) Mitocurcumin (EIC + MS2(487.30) → 303.00), Rt = 6.5; (c) Curcumin (EIC − MS2(367.00) → 216.80), Rt = 5.0.
Figure 5. Molecular spectra for (a) Cur (EIC − MS2(367.0 → 216.80)), (b) Quercetin (EIC − MS2(301.0 m/z → 178.89 m/z)), and (c) MitoCur (EIC + MS2(487.30) → 303.00).
Figure 6. Comparison of limits of detection and limits of quantification for Cur (reference values: Ramalingam et al. [31], Kunati et al. [32], and Hayun et al. [33]) and VPA (reference values: Vancea et al. [34], Hara et al. [35], and Gao et al. [36]).
Figure 7. Brain concentration after 30 min of exposure by immersion. Violin plots illustrate the distribution of the values, the median (continuous line), and the interquartile range (dotted lines): (a) Cur and MitoCur (ordinary one-way ANOVA + Tukey's multiple comparisons test); (b) VPA (unpaired t test). Statistical significance is indicated as **** p < 0.0001. The p values were adjusted to account for multiple comparisons.
Figure 8. The chemical structure of the investigated mitocurcumin. The structure was drawn in ChemSketch (Freeware) 2024.1.3.
Figure 9. Schematic representation of the experimental design. Zebrafish were treated with three anti-epileptic compounds for 30 min, at two concentrations each: 0.25 µM and 0.5 µM for Cur and MitoCur, and 0.25 mM and 0.5 mM for VPA. Behavioral analysis was conducted in aquarium water for 4 min. The animals were subsequently exposed to PTZ (pentylenetetrazole) for 10 min to assess the antiseizure effects of the compounds. Afterward, HPLC/LC-MS analysis was performed on brain tissue extracts to quantify drug concentrations.
3621 KiB  
Proceeding Paper
Indoor Received Signal Strength Indicator Measurements for Device-Free Target Sensing
by Alex Zhindon-Romero, Cesar Vargas-Rosales and Fidel Rodriguez-Corbo
Eng. Proc. 2024, 82(1), 44; https://doi.org/10.3390/ecsa-11-20491 - 26 Nov 2024
Viewed by 174
Abstract
For applications such as home surveillance systems and assisted living for elderly care, sensing capabilities are essential for tasks such as locating a person, determining their approximate position, or identifying their status (static or moving), since the effects caused by the presence of people are captured in the power received by signals in an infrastructure deployed for these purposes. Human interference in Received Signal Strength Indicator (RSSI) measurements between different pairs of wireless nodes varies depending on whether the target is moving or static. To test these ideas, an experiment was conducted using four nodes equipped with the ZigBee protocol, one in each corner of an empty 6.9 m × 8.1 m × 3.05 m room. These nodes were configured as routers, communicating with a coordinator outside the room that instructed the nodes to send back their pairwise RSSI measurements. The coordinator was connected to a computer that logged the measurements and the time at which they were generated. Logging was run for every iteration of the experiment: with a static target, with a moving target, and with the number of targets increased to five. The data were then statistically analyzed to extract patterns and other target-related parameters. There was a correlation between the change in the pairwise RSSI and the path described by the target when moving through the room. The resulting data can aid algorithms for device-free localization and crowd classification, both at low infrastructure cost, and shed light on the relevant characteristics correlated with path and crowd size in indoor settings.
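The static-versus-moving distinction studied here typically surfaces as fluctuation in the pairwise RSSI streams: a static person shifts the mean level of the links they obstruct, while a moving person produces large short-term variance. A minimal Python sketch of such a per-link feature; the window data, threshold, and function names are illustrative assumptions, not the authors' method:

```python
import statistics

def link_features(rssi_window):
    """Mean and standard deviation of one link's RSSI (dBm)
    over a sliding window of samples."""
    return statistics.mean(rssi_window), statistics.stdev(rssi_window)

def classify_motion(windows_by_link, std_threshold=2.0):
    """Label the room 'moving' if any node-pair link shows RSSI
    fluctuation above the (illustrative) threshold, else 'static'."""
    stds = [link_features(w)[1] for w in windows_by_link.values()]
    return "moving" if max(stds) > std_threshold else "static"

# Hypothetical windows for two of the six node-pair links (dBm).
windows = {
    ("n1", "n2"): [-62, -61, -63, -62, -62],
    ("n1", "n3"): [-70, -58, -74, -61, -69],  # strong fluctuation
}
print(classify_motion(windows))  # moving
```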
Figure 1. Classroom environment setup: (a) panoramic view; (b) schematic and device location.
Figure 2. Baseline case.
Figure 3. Five people standing still.
Figure 4. Walking track (slow).
Figure 5. The p-norm representation for 3 scenarios.