
Search Results (17)

Search Parameters:
Keywords = self-adaptive local prior

20 pages, 9507 KiB  
Article
Sparse SAR Imaging Based on Non-Local Asymmetric Pixel-Shuffle Blind Spot Network
by Yao Zhao, Decheng Xiao, Zhouhao Pan, Bingo Wing-Kuen Ling, Ye Tian and Zhe Zhang
Remote Sens. 2024, 16(13), 2367; https://doi.org/10.3390/rs16132367 - 28 Jun 2024
Viewed by 676
Abstract
The integration of Synthetic Aperture Radar (SAR) imaging technology with deep neural networks has experienced significant advancements in recent years. Yet, progress in this domain has been limited by the scarcity of high-quality samples and the difficulty of extracting prior information from SAR data. This study introduces an innovative sparse SAR imaging approach using a self-supervised non-local asymmetric pixel-shuffle blind spot network. This strategy enables the network to be trained without labeled samples, thus solving the problem of the scarcity of high-quality samples. Through the asymmetric pixel-shuffle downsampling (AP) operation, the spatial correlation between pixels is broken so that the blind spot network can adapt to the actual scene. The network also incorporates a non-local module (NLM) into its blind spot architecture, enhancing its capability to analyze a broader range of information and extract more comprehensive prior knowledge from SAR data. Subsequently, Plug and Play (PnP) technology is used to integrate the trained network into the sparse SAR imaging model to solve the regularization term problem. The optimization of the inverse problem is achieved through the Alternating Direction Method of Multipliers (ADMM) algorithm. Experimental results on unlabeled samples demonstrate that our method significantly outperforms traditional techniques in reconstructing images across various regions. Full article
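The PnP-ADMM scheme described above alternates a data-fidelity step with a denoising step played by the trained blind-spot network. The sketch below shows the generic structure of such an iteration; the operators A/At, the denoiser handle, the step size, and the penalty rho are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np

def pnp_admm(y, A, At, denoiser, rho=1.0, n_iter=30):
    """Generic PnP-ADMM sketch for min_x 0.5*||A x - y||^2 + R(x),
    where R is handled implicitly by a learned denoiser (here a placeholder)."""
    x = At(y)                      # back-projection as the initial guess
    v = x.copy()                   # splitting variable
    u = np.zeros_like(x)           # scaled dual variable
    for _ in range(n_iter):
        # x-update: data-fidelity subproblem, solved here by a few gradient steps
        for _ in range(5):
            grad = At(A(x) - y) + rho * (x - v + u)
            x = x - 0.1 * grad
        # v-update: the learned denoiser plays the role of the proximal operator
        v = denoiser(x + u)
        # dual update
        u = u + x - v
    return x
```

With A/At standing in for the SAR observation model and denoiser bound to the trained blind-spot network, this loop mirrors the overall structure described in the abstract.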
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
Show Figures

Graphical abstract
Figure 1: Differences between blind spot networks and conventional networks. (a) Conventional networks, when processing a single pixel, rely on the information provided by its surrounding pixels, including the pixel itself. (b) Blind spot networks, when processing a single pixel, rely on the information provided by its surrounding pixels and exclude the pixel itself.
Figure 2: Simplified architecture of the asymmetric pixel-shuffle blind spot network.
Figure 3: Illustration of the NLAPBSN architecture.
Figure 4: Effects of stride factors a and b.
Figure 5: Imaging results for simulated data at an 80% undersampling ratio. (a) CSA; (b) L1 + TV; (c) PPB; (d) SAE-Net; (e) proposed method.
Figure 6: Imaging results for simulated data at a 60% undersampling ratio. (a) CSA; (b) L1 + TV; (c) PPB; (d) SAE-Net; (e) proposed method.
Figure 7: Results for surface targets of different shapes. (a) Original reference images; (b) images with substantial noise; (c) images processed using the proposed method.
Figure 8: Performance analysis of the non-local module adapted to the blind-spot network. (a) PSNR; (b) SSIM.
Figure 9: Performance analysis of the asymmetric pixel-shuffle downsampling operation. (a) PSNR; (b) SSIM.
Figure 10: Imaging results of a plain region with different methods. (a) CSA; (b) L1 + TV; (c) PPB; (d) SAE-Net; (e) proposed method.
Figure 11: Imaging results using different methods. (a) CSA; (b) L1 + TV; (c) PPB; (d) SAE-Net; (e) proposed method.
Figure 12: Imaging results of the ferry terminal area using different methods. (a) CSA; (b) L1 + TV; (c) PPB; (d) SAE-Net; (e) proposed method.
Figure 13: Imaging results of ships compared across different methods and sampling ratios. First row (a–e): full-sampling results (CSA, L1+TV, PPB, SAE-Net, proposed method); second row (f–j): 80% sampling results (CSA, L1+TV, PPB, SAE-Net, proposed method).
22 pages, 11135 KiB  
Article
Multi-UAV Cooperative Localization Using Adaptive Wasserstein Filter with Distance-Constrained Bare Bones Self-Recovery Particles
by Xiuli Xin, Feng Pan, Yuhe Wang and Xiaoxue Feng
Drones 2024, 8(6), 234; https://doi.org/10.3390/drones8060234 - 30 May 2024
Viewed by 672
Abstract
Aiming at the cooperative localization problem for the dynamic UAV swarm in an anchor-limited environment, an adaptive Wasserstein filter (AWF) with distance-constrained bare bones self-recovery particles (CBBP) is proposed. Firstly, to suppress the cumulative error from the inertial navigation system (INS), a position-prediction strategy based on transition particles is designed instead of using inertial measurements directly, which ensures that the generated prior particles can better cover the ground truth and provide the uncertainties of nonlinear estimation. Then, to effectively quantify the difference between the observed and the prior data, the Wasserstein measure based on slice segmentation is introduced to update the posterior weights of the particles, which makes the proposed algorithm robust against distance-measurement noise variance under the strongly nonlinear model. In addition, to solve the problem of particle impoverishment caused by traditional resampling, a diversity threshold based on Gini purity is designed, and a fast bare bones particle self-recovery algorithm with distance constraint is proposed to guide the outlier particles to the high-likelihood region, which effectively improves the accuracy and stability of the estimation. Finally, the simulation results show that the proposed algorithm is robust against cumulative error in an anchor-limited environment and achieves more competitive accuracy with fewer particles. Full article
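To illustrate how a Wasserstein measure can replace a pointwise likelihood when weighting particles, here is a minimal sketch in which each particle is scored by the 1-D Wasserstein distance between its predicted anchor ranges and the measured ranges. The function names, the exponential weighting, and the scale sigma are assumptions for illustration, not the AWF-CBBP equations.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-sized 1-D empirical samples."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def update_weights(particles, anchors, z_meas, sigma=0.5):
    """Toy weight update: a particle whose predicted anchor-range distribution is
    closer (in Wasserstein distance) to the measured ranges gets a larger weight."""
    weights = np.zeros(len(particles))
    for i, p in enumerate(particles):
        d_pred = np.linalg.norm(anchors - p, axis=1)   # predicted ranges to each anchor
        w_dist = wasserstein_1d(d_pred, z_meas)        # discrepancy between range sets
        weights[i] = np.exp(-w_dist / sigma)           # smaller discrepancy -> larger weight
    return weights / weights.sum()
```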
Show Figures

Figure 1: Navigation coordinates of cooperative localization.
Figure 2: The flowchart of the AWF-CBBP algorithm.
Figure 3: The diagram of prior particle design.
Figure 4: Position results of different algorithms for two trajectories: (a) straight-line trajectory; (b) counterclockwise-curve trajectory.
Figure 5: Mean error distance of different methods: (a) straight-line trajectory; (b) counterclockwise-curve trajectory.
Figure 6: Cumulative function distribution curve.
Figure 7: Contour plots of the likelihood function with different measures for a single UAV: (a) Wasserstein distance; (b) Euclidean distance; (c) KL divergence.
Figure 8: Influence of noise variance on the localization performance of the algorithms.
Figure 9: The error distribution in the anchor-limited scenario.
Figure 10: Cumulative function distribution curve.
Figure 11: Results of ablation experiments: (a) root mean square error; (b) particle diversity index.
Figure 12: Influence of the number of particles on the localization performance of the algorithm.
25 pages, 8266 KiB  
Article
Infrared Small Target Detection Based on Tensor Tree Decomposition and Self-Adaptive Local Prior
by Guiyu Zhang, Zhenyu Ding, Qunbo Lv, Baoyu Zhu, Wenjian Zhang, Jiaao Li and Zheng Tan
Remote Sens. 2024, 16(6), 1108; https://doi.org/10.3390/rs16061108 - 21 Mar 2024
Viewed by 1321
Abstract
Infrared small target detection plays a crucial role in both military and civilian systems. However, current detection methods face significant challenges in complex scenes, such as inaccurate background estimation, inability to distinguish targets from similar non-target points, and poor robustness across various scenes. To address these issues, this study presents a novel spatial–temporal tensor model for infrared small target detection. In our method, we introduce the tensor tree rank to capture global structure in a more balanced strategy, which helps achieve more accurate background estimation. Meanwhile, we design a novel self-adaptive local prior weight by evaluating the level of clutter and noise content in the image. It mitigates the imbalance between target enhancement and background suppression. Then, the spatial–temporal total variation (STTV) is used as a joint regularization term to help better remove noise and obtain better detection performance. Finally, the proposed model is efficiently solved by the alternating direction method of multipliers (ADMM). Extensive experiments demonstrate that our method achieves superior detection performance when compared with other state-of-the-art methods in terms of target enhancement, background suppression, and robustness across various complex scenes. Furthermore, we conduct an ablation study to validate the effectiveness of each module in the proposed model. Full article
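As a rough illustration of what a local prior weight can look like, the sketch below computes a simple center-versus-surround contrast map; the paper's self-adaptive weight additionally accounts for the image-level clutter and noise content, which is not reproduced here. Patch size and normalization are illustrative choices.

```python
import numpy as np

def local_prior_weight(img, patch=5, eps=1e-6):
    """Toy local prior: for each pixel, the positive contrast between the patch center
    and the mean of its surrounding pixels, normalized to [0, 1]; locally salient
    bright points (small-target candidates) receive large weights."""
    h, w = img.shape
    r = patch // 2
    pad = np.pad(img.astype(float), r, mode="reflect")
    weight = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            block = pad[i:i + patch, j:j + patch]
            ring_mean = (block.sum() - block[r, r]) / (patch * patch - 1)
            weight[i, j] = max(block[r, r] - ring_mean, 0.0)
    return weight / (weight.max() + eps)
```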
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
Show Figures

Figure 1: Flowchart of the proposed TTALP-TV model for infrared small target detection.
Figure 2: The diagram of tensor tree decomposition.
Figure 3: Singular value distribution curves of the infrared spatial–temporal tensor along each mode.
Figure 4: Comparison of different local structure priors. Row 1 shows original infrared images. Rows 2 to 6 depict different local prior maps, obtained by Equation (13), RIPT, PSTNN, MFSTPT, and the proposed method, respectively. Columns (a–d) display the prior weights extracted using different calculation methods for four infrared image sequences.
Figure 5: Representative frames corresponding to the six infrared sequences used in the experiments.
Figure 6: Diagram of the target neighborhood.
Figure 7: Three-dimensional ROC curves corresponding to different parameters of L in the six sequences.
Figure 8: Three-dimensional ROC curves corresponding to different parameters of H in the six sequences.
Figure 9: Ablation results of the six sequences in 3D ROC curves.
Figure 10: Detection results of TTALP-TV under different noise intensities.
Figure 11: Detection results of nine methods in sequences 1–3. The red rectangles denote target areas, and the blue ellipses denote noise and background residuals.
Figure 12: Detection results of nine methods in sequences 4–6. The red rectangles denote target areas, and the blue ellipses denote noise and background residuals.
Figure 13: Three-dimensional ROC curves of nine methods in sequences 1–6.
26 pages, 10606 KiB  
Article
Correlative Scan Matching Position Estimation Method by Fusing Visual and Radar Line Features
by Yang Li, Xiwei Cui, Yanping Wang and Jinping Sun
Remote Sens. 2024, 16(1), 114; https://doi.org/10.3390/rs16010114 - 27 Dec 2023
Viewed by 1280
Abstract
Millimeter-wave radar and optical cameras form one of the primary sensing combinations for autonomous platforms such as self-driving vehicles and disaster monitoring robots. The millimeter-wave radar odometry can perform self-pose estimation and environmental mapping. However, cumulative errors can arise during extended measurement periods. In particular, in scenes where loop closure conditions are absent and visual geometric features are discontinuous, existing loop detection methods based on back-end optimization face challenges. To address this issue, this study introduces a correlative scan matching (CSM) pose estimation method that integrates visual and radar line features (VRL-SLAM). By making use of the pose output and the occupied grid map generated by the front end of the millimeter-wave radar’s simultaneous localization and mapping (SLAM), it compensates for accumulated errors by matching discontinuous visual line features and radar line features. Firstly, a pose estimation framework that integrates visual and radar line features was proposed to reduce the accumulated errors generated by the odometer. Secondly, an adaptive Hough transform line detection method (A-Hough) based on the projection of the prior radar grid map was introduced, eliminating interference from non-matching lines, enhancing the accuracy of line feature matching, and establishing a collection of visual line features. Furthermore, a Gaussian mixture model clustering method based on radar cross-section (RCS) was proposed, reducing the impact of radar clutter points on line feature matching. Lastly, actual data from two scenes were collected to compare the algorithm proposed in this study with the CSM algorithm and RI-SLAM. The results demonstrated a reduction in long-term accumulated errors, verifying the effectiveness of the method. Full article
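The A-Hough step builds on the classical Hough voting scheme. Below is a minimal NumPy accumulator for straight-line detection; the adaptive variant described above would further restrict the (rho, theta) search ranges using the projected prior radar grid map. All parameter values here are illustrative.

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, top_k=5):
    """Minimal Hough transform: vote in (rho, theta) space for a set of (row, col)
    edge pixels and return the top_k strongest lines as (rho, theta) pairs."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for y, x in edge_points:
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag  # shift to non-negative index
        acc[rhos, np.arange(n_theta)] += 1
    idx = np.argsort(acc.ravel())[::-1][:top_k]
    rho_idx, theta_idx = np.unravel_index(idx, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(rho_idx, theta_idx)]
```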
(This article belongs to the Special Issue Environmental Monitoring Using UAV and Mobile Mapping Systems)
Show Figures

Figure 1: Schematic diagram of the local coordinate system.
Figure 2: Global coordinate system diagram.
Figure 3: Schematic diagram of the pixel coordinate system.
Figure 4: Method flowchart.
Figure 5: Schematic diagram of the VRL-SLAM method.
Figure 6: The flowchart of the A-Hough method.
Figure 7: Comparative illustration of GMM clustering with and without RCS weighting.
Figure 8: Schematic diagram for category selection in the Gaussian mixture model based on RCS.
Figure 9: Data collection equipment.
Figure 10: Correspondence between optical images and millimeter-wave radar frame sequences.
Figure 11: Optical images of the experimental scenes: (a) Scene 1: Moving Scene; (b) Scene 2: Parking Lot Scene.
Figure 12: Experimental results for the Moving Scene: (a) global grid map; (b) projection of the global grid map onto the optical image; (c) results of A-Hough; (d) results of the Hough transform line detection method.
Figure 13: Experimental results for the Parking Lot Scene: (a) global grid map; (b) projection of the global grid map onto the optical image; (c) results of A-Hough; (d) results of the Hough transform line detection method.
Figure 14: Experimental results for the Moving Scene: (a) global grid map; (b) clustering results for the visual line feature set; (c) results from R-GMM clustering; (d) results from GMM clustering.
Figure 15: Experimental results for the Parking Lot Scene: (a) global grid map; (b) clustering results for the visual line feature set; (c) results from R-GMM clustering; (d) results from GMM clustering.
Figure 16: Comparison of CSM and VRL-SLAM poses for Scene 1.
Figure 17: Comparison of CSM and VRL-SLAM poses for Scene 2.
Figure 18: The Y-direction difference between CSM and VRL-SLAM in Scene 1.
Figure 19: The Y-direction difference between CSM and VRL-SLAM in Scene 2. From Figures 18 and 19, it can be intuitively observed that both methods initially have certain errors; however, the VRL-SLAM method proposed in this paper gradually reduces its error over prolonged measurements, tending towards smoothness, which validates the effectiveness of the method.
Figure 20: Comparison of RI-SLAM and VRL-SLAM poses for Scene 1.
Figure 21: Comparison of RI-SLAM and VRL-SLAM poses for Scene 2.
27 pages, 7236 KiB  
Article
Three Chaotic Strategies for Enhancing the Self-Adaptive Harris Hawk Optimization Algorithm for Global Optimization
by Sultan Almotairi, Elsayed Badr, Mustafa Abdul Salam and Alshimaa Dawood
Mathematics 2023, 11(19), 4181; https://doi.org/10.3390/math11194181 - 6 Oct 2023
Cited by 3 | Viewed by 1480
Abstract
Harris Hawk Optimization (HHO) is a well-known nature-inspired metaheuristic model inspired by the distinctive foraging strategy and cooperative behavior of Harris Hawks. As with numerous other algorithms, HHO is susceptible to getting stuck in local optima and has a sluggish convergence rate. Several techniques have been proposed in the literature to improve the performance of metaheuristic algorithms (MAs) and to tackle their limitations. Chaos optimization strategies have been proposed for many years to enhance MAs. There are four distinct categories of Chaos strategies, including chaotic mapped initialization, randomness, iterations, and controlled parameters. This paper introduces SHHOIRC, a novel hybrid algorithm designed to enhance the efficiency of HHO. Self-adaptive Harris Hawk Optimization using three chaotic optimization methods (SHHOIRC) is the proposed algorithm. On 16 well-known benchmark functions, the proposed hybrid algorithm, authentic HHO, and five HHO variants are evaluated. The computational results and statistical analysis demonstrate that SHHOIRC exhibits notable similarities to other previously published algorithms. The proposed algorithm outperformed the other algorithms by 81.25%, compared to 18.75% for the prior algorithms, by obtaining the best average solutions for 13 benchmark functions. Furthermore, the proposed algorithm is tested on a real-life problem, which is the maximum coverage problem of Wireless Sensor Networks (WSNs), and compared with pure HHO, and two well-known algorithms, Grey Wolf Optimization (GWO) and Whale Optimization Algorithm (WOA). For the maximum coverage experiments, the proposed algorithm demonstrated superior performance, surpassing other algorithms by obtaining the best coverage rates of 95.4375% and 97.125% for experiments 1 and 2, respectively. Full article
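Of the four chaos strategies mentioned, chaotic initialization is the simplest to show. The sketch below seeds a metaheuristic population with a logistic map; the choice of map, seed value, and bounds is an illustrative assumption and not necessarily the maps used in SHHOIRC.

```python
import numpy as np

def chaotic_population(n_hawks, dim, lower, upper, x0=0.7):
    """Logistic-map chaotic initialization: successive chaotic values in (0, 1) are
    scaled into the search bounds, giving a more spread-out initial population than
    plain uniform sampling."""
    lower = np.asarray(lower, dtype=float) * np.ones(dim)
    upper = np.asarray(upper, dtype=float) * np.ones(dim)
    pop = np.empty((n_hawks, dim))
    x = x0
    for i in range(n_hawks):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)              # logistic map with r = 4 (chaotic regime)
            pop[i, j] = lower[j] + x * (upper[j] - lower[j])
    return pop
```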
(This article belongs to the Special Issue Advanced Optimization Methods and Applications, 2nd Edition)
Show Figures

Figure 1: The flowchart of SHHOIRC.
Figure 2: Average convergence curves obtained by 7 HHO versions for 16 benchmark functions (500 iterations).
Figure 3: Maximum coverage convergence curve of average fitness obtained by the 4 algorithms (500 iterations): (a) for the average solutions and (b) for the best solutions.
Figure 4: (a,b) Initial and final distribution of the best solution obtained by SHHOIRC (500 iterations).
Figure 5: Maximum coverage convergence curve of average fitness obtained by the 4 algorithms (1000 iterations): (a) for the average solutions and (b) for the best solutions.
Figure 6: (a,b) Initial and final distribution of the best solution obtained by SHHOIRC (1000 iterations).
Figure 7: (a,b) Boxplots of Experiments 1 and 2.
15 pages, 561 KiB  
Article
Parent and Carer Skills Groups in Dialectical Behaviour Therapy for High-Risk Adolescents with Severe Emotion Dysregulation: A Mixed-Methods Evaluation of Participants’ Outcomes and Experiences
by Lindsay Smith, Katrina Hunt, Sam Parker, Jake Camp, Catherine Stewart and Andre Morris
Int. J. Environ. Res. Public Health 2023, 20(14), 6334; https://doi.org/10.3390/ijerph20146334 - 10 Jul 2023
Cited by 5 | Viewed by 2732
Abstract
Background: There is an established evidence-base for dialectical behaviour therapy for adolescents (DBT-A) in the treatment of young people with severe emotion dysregulation and related problems, including repeated self-harm and suicidal behaviours. However, few studies have reported on parental involvement in such treatments. This study aims to explore the outcomes and experiences of participants of a dedicated skills group for parents and carers embedded within an adapted DBT-A programme in the United Kingdom. Method: This study was conducted within a specialist outpatient Child and Adolescent Mental Health Services (CAMHS) DBT programme in the National Health Service (NHS) in London. Participants were parents and carers of adolescents engaged in the DBT-A programme. Participants attended a 6-month parent and carer skills group intervention and completed self-report measures relating to carer distress, communication and family functioning, at pre-intervention and post-intervention. Following the intervention, semi-structured interviews were also completed with a subgroup of participants to explore their experiences of the skills group and how they perceived its effectiveness. Quantitative and qualitative methods were used to analyse the data collected from participants. Results: Forty-one parents and carers completed the intervention. Participants reported a number of statistically significant changes from pre- to post-intervention: general levels of distress and problems in family communication decreased, while perceived openness of family communication and strengths and adaptability in family functioning increased. A thematic analysis of post-intervention interviews examining participant experiences identified six themes: (1) experiences prior to DBT; (2) safety in DBT; (3) experiences with other parents and carers; (4) new understandings; (5) changes in behaviours; and (6) future suggestions. Discussion: Parents and carers who attended a dedicated DBT skills groups, adapted for local needs, reported improvements in their wellbeing, as well as interactions with their adolescents and more general family functioning, by the end of the intervention. Further studies are needed which report on caregiver involvement in DBT. Full article
(This article belongs to the Special Issue 2nd Edition of Parental Attachment and Adolescent Well-Being)
Show Figures

Figure 1: Themes and subthemes from the semi-structured interviews.
29 pages, 23181 KiB  
Article
SAOCNN: Self-Attention and One-Class Neural Networks for Hyperspectral Anomaly Detection
by Jinshen Wang, Tongbin Ouyang, Yuxiao Duan and Linyan Cui
Remote Sens. 2022, 14(21), 5555; https://doi.org/10.3390/rs14215555 - 3 Nov 2022
Cited by 2 | Viewed by 2349
Abstract
Hyperspectral anomaly detection is a popular research direction for hyperspectral images; however, it is problematic because it separates the background and anomaly without prior target information. Currently, deep neural networks are used as an extractor to mine intrinsic features in hyperspectral images, which can be fed into separate anomaly detection methods to improve their performances. However, this hybrid approach is suboptimal because the subsequent detector is unable to drive the data representation in hidden layers, which makes it a challenge to maximize the capabilities of deep neural networks when extracting the underlying features customized for anomaly detection. To address this issue, a novel unsupervised, self-attention-based, one-class neural network (SAOCNN) is proposed in this paper. SAOCNN consists of two components: a novel feature extraction network and a one-class SVM (OC-SVM) anomaly detection method, which are interconnected and jointly trained by the OC-SVM-like loss function. The adoption of co-training updates the feature extraction network together with the anomaly detector, thus improving the whole network’s detection performance. Considering that the prominent feature of an anomaly lies in its difference from the background, we designed a deep neural extraction network to learn more comprehensive hyperspectral image features, including spectral, global correlation, and local spatial features. To accomplish this goal, we adopted an adversarial autoencoder to produce the residual image with highlighted anomaly targets and a suppressed background, which is input into an improved non-local module to adaptively select the useful global information in the whole deep feature space. In addition, we incorporated a two-layer convolutional network to obtain local features. SAOCNN maps the original hyperspectral data to a learned feature space with better anomaly separation from the background, making it possible for the hyperplane to separate them. Our experiments on six public hyperspectral datasets demonstrate the state-of-the-art performance and superiority of our proposed SAOCNN when extracting deep potential features, which are more conducive to anomaly detection. Full article
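The OC-SVM-like objective that jointly trains the extractor and detector can be written as a hinge loss over the learned scores. The following sketch shows a common one-class formulation (in the style of OC-NN/Deep SVDD); the exact SAOCNN loss terms may differ, and all symbols here are illustrative.

```python
import numpy as np

def oc_svm_loss(scores, w, r, nu=0.1, lam=1e-3):
    """One-class hinge loss: scores = w . f(x) for each sample, r is the learned
    margin offset. Background samples should score above r; the few that fall
    below pay a hinge penalty, so a small fraction (controlled by nu) is allowed
    to lie on the anomalous side of the hyperplane."""
    hinge = np.maximum(0.0, r - scores)
    return 0.5 * lam * np.dot(w, w) + np.mean(hinge) / nu - r

# At test time, a sample with w . f(x) < r would be flagged as an anomaly.
```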
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Figure 1: Flow chart of the proposed method for hyperspectral anomaly detection, consisting of two parts: the feature extraction network, which includes AAE, rNon-local, and the Local feature learning Net, and the anomaly detection network (OCSVM-Net). X^c is the original 3D hyperspectral matrix reshaped with N = H × W pixels and k bands. Here, the superscript c only serves as a symbol to represent the 3D matrix, with no practical meaning; the same applies to the reconstructed image X̂^c and the residual image R^c.
Figure 2: Reconstruction process of AE in hyperspectral data. (a) Reconstruction of anomaly. (b) Reconstruction of background.
Figure 3: Reconstruction process of AAE in hyperspectral data.
Figure 4: Details of the network and loss functions based on AAE. Huber loss is provided for the encoder–decoder, and WGAN-GP loss is provided for the discriminator.
Figure 5: (a) Original non-local network with input of X. (b) Improved rNon-local in the proposed method with inputs of X and residual R.
Figure 6: Anomaly detection maps of SAOCNN, SAOCNN_NS, HADGAN, LREN, LSAD, CRD, and GRX. (a) Coast. (b) Pavia. (c) DC Mall. (d) HYDICE. (e) Salinas. (f) San Diego.
Figure 7: ROC of (P_d, P_f) for SAOCNN, SAOCNN_NS, HADGAN, LREN, LSAD, CRD, and GRX.
Figure 8: ROC of (P_f, τ) for SAOCNN, SAOCNN_NS, HADGAN, LREN, LSAD, CRD, and GRX.
Figure 9: Box plots for SAOCNN, SAOCNN_NS, HADGAN, LREN, LSAD, CRD, and GRX.
Figure 10: Anomaly detection maps of the ablation study. (a) Coast. (b) Pavia. (c) DC Mall. (d) HYDICE. (e) Salinas. (f) San Diego.
Figure 11: ROC of (P_d, P_f) for the ablation study.
Figure 12: Box plot of the ablation study.
Figure 13: ROC of (P_d, P_f) for the training study.
Figure 14: Box plot of the training study.
21 pages, 28793 KiB  
Article
KASiam: Keypoints-Aligned Siamese Network for the Completion of Partial TLS Point Clouds
by Xinpu Liu, Yanxin Ma, Ke Xu, Ling Wang and Jianwei Wan
Remote Sens. 2022, 14(15), 3617; https://doi.org/10.3390/rs14153617 - 28 Jul 2022
Viewed by 1707
Abstract
Completing point clouds from partial terrestrial laser scans (TLS) is a fundamental step for many 3D visual applications, such as remote sensing, digital cities and autonomous driving. However, existing methods mainly follow an ordinary auto-encoder architecture with only partial point clouds as inputs and adopt K-Nearest Neighbors (KNN) operations to extract local geometric features; the former takes insufficient advantage of the input point clouds, while the latter has limited ability to capture long-range geometric relationships. In this paper, we propose a keypoints-aligned siamese (KASiam) network for the completion of partial TLS point clouds. The network follows a novel siamese auto-encoder architecture to learn prior geometric information of complete shapes by aligning keypoints of complete-partial pairs during training. Moreover, we propose two essential blocks, cross-attention perception (CAP) and self-attention augment (SAA), which replace KNN operations with attention mechanisms and are able to establish long-range geometric relationships among points by selecting neighborhoods adaptively at the global level. Experiments conducted on widely used benchmarks and several TLS datasets demonstrate that our method outperforms other state-of-the-art methods by at least a 4.72% reduction in the average Chamfer Distance across categories on the PCN dataset, and can generate finer point cloud shapes on partial TLS data. Full article
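The CAP and SAA blocks rely on scaled dot-product attention so that each point selects its neighborhood globally rather than through KNN. A minimal cross-attention sketch is shown below; the projection matrices and shapes are placeholders, not the KASiam layer definitions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, wq, wk, wv):
    """Scaled dot-product cross-attention: every query point attends to all points of
    the other set, so neighborhoods are selected adaptively at the global level instead
    of by a fixed KNN radius. q_feats: (Nq, d), kv_feats: (Nk, d); wq/wk/wv: (d, d)."""
    q, k, v = q_feats @ wq, kv_feats @ wk, kv_feats @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (Nq, Nk) attention weights
    return attn @ v                                  # aggregated feature per query point
```

Setting q_feats and kv_feats to the same point set turns this into the self-attention used for the SAA-style augmentation.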
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud)
Show Figures

Figure 1: The overall architecture of KASiam, based on a siamese auto-encoder structure. For clarity, the symbol ^ is added to the variables and modules on the reconstruction path.
Figure 2: The structure of the feature extractor module. Here, USC is an abbreviation of the umbrella surface constructor proposed by RepSurf [18] and © denotes a concatenation operation.
Figure 3: The structure of the keypoint generator module. Here, ConvT denotes a convtranspose operation and FPS denotes the farthest point sampling [16] operation.
Figure 4: The structure of the shape refiner module. Here, N2 = N1 × m1 × m2 × m3. It is worth noting that the point number of P_out is not fixed, and the selection of the parameters m1, m2, m3 depends on different point cloud completion tasks.
Figure 5: The structure of the cross-attention perception and self-attention augment blocks. Here, BN is a layer of batch normalization, © denotes a concatenation operation, and ×m means increasing the feature dimension of the input matrix to m times by layers of MLP.
Figure 6: The structure of the channel attention block. Here, FC denotes a fully connected layer and ⊙ denotes a dot-product operation.
Figure 7: Visual comparison of point cloud completion on the PCN dataset. Our method can generate objects with smoother surfaces, more detailed shapes and more semantic structures. For clarity, the enlarged detail structures are marked with red boxes.
Figure 8: Visual comparison of point cloud completion on the TLS data. Our method can mitigate the impact of the data density distribution and generate more accurate shapes. For clarity, the enlarged detail structures are marked with red boxes.
Figure 9: Visualization analyses of the attention mechanisms on point clouds. Here, query points are marked in green, and colors of points changing from red to blue indicate that query points pay higher to lower attention to the corresponding regions. (a) Visual comparison of different neighborhood selection methods of query points between the traditional KNN (k = 64), Global-KNN [15] and KASiam (first CAA layer). (b) Visual comparison of the outputs of different CAA layers. (c) Visual comparison of the attentive regions of different query points (second CAA layer). (d) Results of attention mechanisms on more objects (first CAA layer).
Figure 10: Visualization results of the keypoints alignment operation. Here, small gray dots and large red dots present input points and keypoints, respectively. (a) Visualization process of the keypoints alignment operation; for clarity, the enlarged detail structures are marked with blue boxes. (b) Results of the keypoints alignment operation on more objects.
Figure 11: Visualization results of the process of point cloud completion by shape refiners. Here, after obtaining the keypoints P_key, they are sent to two tandem shape refiners to generate complete point clouds step by step (P_key → P_mid → P_fine).
Figure 12: Visualization of the three typical faulty modes in the PCN dataset, which are mishandling rare shapes (plane), losing detailed structures (cabinet) and mishandling numerous and detached parts (lamp).
Figure A1: The complete and detailed architecture of KASiam.
20 pages, 2677 KiB  
Article
An Efficient Sparse Bayesian Learning STAP Algorithm with Adaptive Laplace Prior
by Weichen Cui, Tong Wang, Degen Wang and Kun Liu
Remote Sens. 2022, 14(15), 3520; https://doi.org/10.3390/rs14153520 - 22 Jul 2022
Cited by 8 | Viewed by 1731
Abstract
Space-time adaptive processing (STAP) encounters severe performance degradation with insufficient training samples in inhomogeneous environments. Sparse Bayesian learning (SBL) algorithms have attracted extensive attention because of their robust and self-regularizing nature. In this study, a computationally efficient SBL STAP algorithm with adaptive Laplace prior is developed. Firstly, a hierarchical Bayesian model with adaptive Laplace prior for complex-valued space-time snapshots (CALM-SBL) is formulated. The Laplace prior enforces sparsity more heavily than the Gaussian prior, which achieves a better reconstruction of the clutter plus noise covariance matrix (CNCM). However, similar to other SBL-based algorithms, a large number of degrees of freedom brings a heavy burden to the real-time processing system. To overcome this drawback, an efficient localized reduced-dimension sparse recovery-based space-time adaptive processing (LRDSR-STAP) framework is proposed in this paper. By using a set of deeply weighted Doppler filters and exploiting prior knowledge of the clutter ridge, a novel localized reduced-dimension dictionary is constructed, and the computational load can be considerably reduced. Numerical experiments validate that the proposed method achieves better performance with significantly reduced computational complexity in limited-snapshot scenarios. It can be found that the proposed LRDSR-CALM-STAP algorithm has the potential to be implemented in practical real-time processing systems. Full article
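For readers unfamiliar with SBL, the sketch below shows the textbook evidence-maximization loop with a Gaussian prior; CALM-SBL replaces the hyperparameter update with one derived from the adaptive Laplace prior, which is not reproduced here. The dictionary, noise level, and iteration count are illustrative.

```python
import numpy as np

def sbl(Phi, y, n_iter=50, noise_var=1e-2):
    """Textbook SBL sketch: alternate between the Gaussian posterior of the sparse
    coefficients and an EM-style update of the per-coefficient prior variances gamma.
    Works for real or complex data (e.g., space-time snapshots)."""
    n, m = Phi.shape
    gamma = np.ones(m)                               # prior variances of the coefficients
    PhiH = Phi.conj().T
    for _ in range(n_iter):
        G = np.diag(gamma)
        Sigma_y = noise_var * np.eye(n) + Phi @ G @ PhiH
        K = G @ PhiH @ np.linalg.inv(Sigma_y)
        mu = K @ y                                   # posterior mean of the coefficients
        Sigma = G - K @ Phi @ G                      # posterior covariance
        gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))   # hyperparameter update
    return mu, gamma
```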
Show Figures

Figure 1: Geometric configuration of an airborne surveillance radar.
Figure 2: Graphical model of the proposed hierarchical Bayesian model with complex adaptive Laplace prior.
Figure 3: Gaussian and Laplace log-distributions.
Figure 4: Illustration of local grid selection.
Figure 5: Block diagram for LRDSR-CALM-SBL.
Figure 6: SINR loss comparison of different SR-STAP methods.
Figure 7: The average SINR loss against the number of samples.
Figure 8: SINR loss comparison for different K.
Figure 9: SINR loss comparison of different RD-STAP methods in four different cases. (a) Ideal case. (b) Spatial mismatch case. (c) Temporal mismatch case. (d) Ideal case when β = 0.5.
Figure 10: The average SINR loss against the number of samples.
Figure 11: Probability of detection versus target SNR.
22 pages, 3870 KiB  
Article
Improved Boundary Support Vector Clustering with Self-Adaption Support
by Huina Li, Yuan Ping, Bin Hao, Chun Guo and Yujian Liu
Electronics 2022, 11(12), 1854; https://doi.org/10.3390/electronics11121854 - 11 Jun 2022
Cited by 2 | Viewed by 1464
Abstract
Concerning the good description of arbitrarily shaped clusters, collecting accurate support vectors (SVs) is critical yet resource-consuming for support vector clustering (SVC). Even though SVs can be extracted from the boundaries for efficiency, boundary patterns with too much noise and inappropriate parameter settings, such as the kernel width, also confuse the connectivity analysis. Thus, we propose an improved boundary SVC (IBSVC) with self-adaption support for reasonable boundaries and comfortable parameters. The first self-adaption is in the movable edge selection (MES). By introducing a divide-and-conquer strategy with the k-means++ support, it collects local, informative, and reasonable edges for the minimal hypersphere construction while rejecting pseudo-borders and outliers. Rather than the execution of model learning with repetitive training and evaluation, we fuse the second self-adaption with the flexible parameter selection (FPS) for direct model construction. FPS automatically selects the kernel width to meet a conformity constraint, which is defined by measuring the difference between the data description drawn by the model and the actual pattern. Finally, IBSVC adopts a convex decomposition-based strategy to finish cluster checking and labeling even though there is no prior knowledge of the cluster number. Theoretical analysis and experimental results confirm that IBSVC can discover clusters with high computational efficiency and applicability. Full article
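The MES stage leans on k-means++ style seeding to divide the data into local regions before edge selection. A standard k-means++ seeding routine is sketched below as context; it is not the authors' code, and the distance computation is written for clarity rather than speed.

```python
import numpy as np

def kmeans_pp_seeds(X, k, rng=None):
    """Standard k-means++ seeding: each new center is drawn with probability proportional
    to the squared distance to the nearest center already chosen, which spreads out the
    local regions that a divide-and-conquer edge selection can then scan."""
    rng = np.random.default_rng(rng)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        c = np.array(centers)
        d2 = np.min(((X[:, None, :] - c[None, :, :]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```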
(This article belongs to the Topic Data Science and Knowledge Discovery)
Show Figures

Figure 1: Framework of MES.
Figure 2: Edge data collected by different boundary selection methods on the five Gaussians with 1000 data samples equally distributed into five clusters: (a) BEPS [15] collects edge data, with k = 30 and γ = 0.8; (b) SBS [8] finds edges, with k = 30, γ_l = 0.8, γ_u = 0.9; (c) convergence direction description based on the results of SBS; (d) MBS obtains new edges not in the original dataset by allowing edges to move.
Figure 3: Fake edge analysis for elimination based on the convergence directions of a data sample's five nearest neighbors. The dataset used is Chameleon [16], with eight irregular clusters after noise elimination by [17]. Here, MBS is conducted with K = 10, k1 = 30, γ_l = 0.85, and γ_u = 0.95.
Figure 4: Edges collected by BEPS [15], SBS of BSVC [8], and MES of IBSVC on DS3 provided by [16]. (a) BEPS: k1 = 30, γ = 0.8. (b) BSVC: k1 = 30, γ_l = 0.8, γ_u = 0.95. (c) MES: K = 20, k1 = 30, k2 = 5, γ_l = 0.8, γ_u = 0.95, τ_f = 0.25. (d) The average run-time cost in seconds.
Figure 5: Accuracies of different β initial strategies and numbers of iterations.
Figure 6: Improvements over k-means++ made by IBSVC.
23 pages, 5870 KiB  
Article
A Self-Adaptive Optimization Individual Tree Modeling Method for Terrestrial LiDAR Point Clouds
by Zhenyang Hui, Zhaochen Cai, Bo Liu, Dajun Li, Hua Liu and Zhuoxuan Li
Remote Sens. 2022, 14(11), 2545; https://doi.org/10.3390/rs14112545 - 26 May 2022
Cited by 7 | Viewed by 2102
Abstract
Individual tree modeling for terrestrial LiDAR point clouds always involves a heavy computational burden and low accuracy for complex tree structures. To solve these problems, this paper proposed a self-adaptive optimization individual tree modeling method. In this paper, we first proposed a joint neighboring growing method to segment wood points into object primitives. Subsequently, local object primitives were optimized to alleviate the computational burden. To build the topology relation among branches, branches were separated based on spatial connectivity analysis. Then, the nodes corresponding to each object primitive were adopted to construct the graph structure of the tree. Furthermore, each object primitive was fitted as a cylinder. To revise local abnormal cylinders, a self-adaptive optimization method based on the constructed graph structure was proposed. Finally, the constructed tree model was further optimized globally based on prior knowledge. Twenty-nine field datasets obtained from three forest sites were adopted to evaluate the performance of the proposed method. The experimental results show that the proposed method can achieve satisfactory individual tree modeling accuracy. The mean volume deviation of the proposed method is 1.427 m3. In comparison with two other well-known tree modeling methods, the proposed method achieves the best individual tree modeling result regardless of which accuracy indicator is selected. Full article
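Each object primitive is fitted as a cylinder, which in practice reduces to fitting a circle to the points projected onto the plane orthogonal to the branch axis. The algebraic (Kasa) circle fit below is one common way to do this; it is shown as a generic sketch, not the fitting routine used in the paper.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2-D points, e.g. a branch cross-section
    projected onto the plane perpendicular to its axis; returns the center (a, b) and
    radius r. One circle per object primitive along the axis yields the cylinder parameters."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), r
```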
(This article belongs to the Special Issue 3D Point Clouds in Forest Remote Sensing II)
Show Figures

Figure 1: Flowchart of the proposed method.
Figure 2: Neighbors growing based on joint K-neighbors and a fixed radius. (a) Neighbors growing based on K-neighbors; (b) neighbors growing under the constraint of the fixed radius. In (a), blue points are the starting points, while orange points are the growing points using the K nearest neighbors. In (b), green points are the final growing points for the blue points under the constraint of a fixed radius, and red points are the excluded points.
Figure 3: Branch separation based on the spatial connectivity of the neighboring point sets. {Set_i} indicates the neighboring point set, Li represents the separated branches, Si along with the dotted line denotes the spatial connectivity analysis of the neighboring point set, Pi is the location where connectivity changes, and the arrow indicates the growing direction.
Figure 4: Spatial connectivity analysis of the neighboring point set. (a) The number of connected components is 1; (b) the number of connected components is 2.
Figure 5: Local object primitive self-adaptive constraint adjustment. (a) Segmented object primitives based on the joint neighboring growing; (b) result of object primitive self-adaptive adjustment. Different primitives in (a,b) are rendered in different colors.
Figure 6: Sketch map of neighboring point set fusing.
Figure 7: Graph structure for the tree.
Figure 8: Abnormal fitting for a local object primitive. (a) Local abnormal fitted diameter; (b) abnormal fitted cylinder. In (a), it can be found that R1 is obviously larger than R2. The red frame represents the abnormal cylinder.
Figure 9: Variation in the diameter of branches under natural conditions. (a) Variation in the diameter of one branch; (b) variation in the diameters of different branches.
Figure 10: Branch model comparison before and after local optimization. (a) Branch model before optimization; (b) branch model after optimization. R1, R2, R3, and R4 are the diameters of the fitted cylinders.
Figure 11: Self-revision of branch crosses. (a) Local abnormal cylinder; (b) cylinder after optimization; (c) shortest path analysis based on the constructed graph structure; (d) branch crosses after self-revision. In (a), 'A' represents the abnormal cylinder; in (b), 'A'' represents the optimized cylinder; in (d), 'B'' and 'C'' represent the cylinders that fill the gap.
Figure 12: Tree samples for the three forest sites. (a–d) Tree samples from the Peruvian site; (e–h) tree samples from the Indonesian site; (i–l) tree samples from the Guyanese site.
Figure 13: Volume deviation for each sample.
Figure 14: Tree height deviation for each sample.
Figure 15: DBH deviation for each sample.
Figure 16: QSM volume comparison among trees with different DBH and tree heights. (a) QSM volume comparison among trees with DBH smaller than 70 cm and larger than 70 cm. (b) QSM volume comparison among trees with height lower than 30 m and higher than 30 m.
Figure 17: Regression analysis between the built QSM volume and the harvested volume. (a) Regression analysis for TreeQSM; (b) regression analysis for AdQSM; (c) regression analysis for ProposedQSM.
Figure 18: Volume deviations of the 29 samples for the three methods.
35 pages, 1489 KiB  
Review
On The Biophysical Complexity of Brain Dynamics: An Outlook
by Nandan Shettigar, Chun-Lin Yang, Kuang-Chung Tu and C. Steve Suh
Dynamics 2022, 2(2), 114-148; https://doi.org/10.3390/dynamics2020006 - 5 May 2022
Cited by 7 | Viewed by 5577
Abstract
The human brain is a complex network whose ensemble time evolution is directed by the cumulative interactions of its cellular components, such as neurons and glia cells. Coupled through chemical neurotransmission and receptor activation, these individuals interact with one another to varying degrees by triggering a variety of cellular activity from internal biological reconfigurations to external interactions with other network agents. Consequently, such local dynamic connections mediating the magnitude and direction of influence cells have on one another are highly nonlinear and facilitate, respectively, nonlinear and potentially chaotic multicellular higher-order collaborations. Thus, as a statistical physical system, the nonlinear culmination of local interactions produces complex global emergent network behaviors, enabling the highly dynamical, adaptive, and efficient response of a macroscopic brain network. Microstate reconfigurations are typically facilitated through synaptic and structural plasticity mechanisms that alter the degree of coupling (magnitude of influence) neurons have upon each other, dictating the type of coordinated macrostate emergence in populations of neural cells. These can emerge in the form of local regions of synchronized clusters about a center frequency composed of individual neural cell collaborations as a fundamental form of collective organization. A single mode of synchronization is insufficient for the computational needs of the brain. Thus, as neural components influence one another (cellular components, multiple clusters of synchronous populations, brain nuclei, and even brain regions), different patterns of neural behavior interact with one another to produce an emergent spatiotemporal spectral bandwidth of neural activity corresponding to the dynamical state of the brain network. Furthermore, hierarchical and self-similar structures support these network properties to operate effectively and efficiently. Neuroscience has come a long way since its inception; however, a comprehensive and intuitive understanding of how the brain works is still amiss. It is becoming evident that any singular perspective upon the grandiose biophysical complexity within the brain is inadequate. It is the purpose of this paper to provide an outlook through a multitude of perspectives, including the fundamental biological mechanisms and how these operate within the physical constraints of nature. Upon assessing the state of prior research efforts, in this paper, we identify the path future research effort should pursue to inspire progress in neuroscience. Full article
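The synchronized clusters of coupled neural oscillators discussed above are often illustrated with the Kuramoto model, a canonical and deliberately minimal abstraction that is not taken from this review. The sketch below integrates the model and tracks the order parameter that quantifies global synchrony as the coupling strength K increases.

```python
import numpy as np

def kuramoto(theta0, omega, K, dt=0.01, steps=2000):
    """Kuramoto model of N coupled phase oscillators:
    d(theta_i)/dt = omega_i + (K / N) * sum_j sin(theta_j - theta_i).
    The order parameter |r| in [0, 1] measures how strongly the population synchronizes."""
    theta = np.array(theta0, dtype=float)
    n = len(theta)
    order = []
    for _ in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)  # sum over j for each i
        theta += dt * (omega + (K / n) * coupling)
        order.append(np.abs(np.mean(np.exp(1j * theta))))
    return theta, np.array(order)
```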
Show Figures

Figure 1

Figure 1
<p>A simplified illustration of the synapse. Magnitude of interaction is determined by concentration of neurotransmitters and cumulative availability of receptors. Direction of interactions (excitatory or inhibitory) is typically controlled by the type of neurotransmitter released. Thus, factors that influence these parameters control synaptic strength. As synapses are housed on axonal and dendritic structures, their properties also have significant influence on synaptic strength. Reproduced with permission [<a href="#B115-dynamics-02-00006" class="html-bibr">115</a>].</p>
Full article ">Figure 2
<p>Myelin sheath distribution by oligodendrocytes on axons to speed up action potential conduction. Adaptive myelination, by controlling distribution of myelin, confers larger-scale modulation of signal transmission dynamics, as opposed to synaptic plasticity mechanisms. Activity-dependent control of myelin distribution along white matter tracts of the brain (connecting different regions) temporally modulates signal transmission, resulting in reconfiguration of connections between larger-scale brain regions. Reproduced from Shutterstock [<a href="#B123-dynamics-02-00006" class="html-bibr">123</a>].</p>
Full article ">Figure 3
<p>Simplified representation of synchronized clusters of nodes. Multiple synchronous modes representing different amplified and stable forms of information capable of influencing and being influenced by one another. The nonlinear culmination of such interactions composes the overall ensemble of brain dynamics. Reproduced with permission from Russo et al. [<a href="#B163-dynamics-02-00006" class="html-bibr">163</a>].</p>
Full article ">
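The "synchronized clusters about a center frequency" described in the abstract and Figure 3 are commonly illustrated with coupled phase oscillators. The minimal Kuramoto sketch below shows how the coupling strength K pushes a population toward collective synchrony; it is an illustration of that general idea only, not the authors' model, and all parameter values are arbitrary.

```python
# Minimal Kuramoto sketch (illustrative only, not the authors' model):
# N phase oscillators with natural frequencies omega_i, coupled with strength K.
import numpy as np

def kuramoto_order_parameter(N=100, K=1.5, dt=0.05, steps=2000, seed=0):
    """Euler-integrate N coupled phase oscillators and return the order parameter r."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)   # initial phases
    omega = rng.normal(1.0, 0.2, N)            # natural frequencies about a center frequency

    for _ in range(steps):
        # Pairwise coupling: each oscillator is pulled toward the phases of the others.
        pull = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (K / N) * pull)

    # r in [0, 1]: r -> 1 indicates a phase-synchronized cluster.
    return float(np.abs(np.exp(1j * theta).mean()))

print(kuramoto_order_parameter(K=0.1))   # weak coupling: low r (incoherent)
print(kuramoto_order_parameter(K=2.0))   # strong coupling: high r (synchronized cluster)
```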
34 pages, 8539 KiB  
Article
In-Company Smart Charging: Development of a Simulation Model to Facilitate a Smart EV Charging System
by Mike F. Voss, Steven P. Haveman and Gerrit Maarten Bonnema
Energies 2021, 14(20), 6723; https://doi.org/10.3390/en14206723 - 15 Oct 2021
Cited by 1 | Viewed by 3529
Abstract
Current electric vehicle (EV) charging systems have limited smart functionality, and most research focuses on load-balancing the national or regional grid. In this article, we focus on supporting the early design of a smart charging system that can effectively and efficiently charge a company’s EV fleet, maximizing the use of self-generated Photo-Voltaic energy. The support takes place in the form of the Vehicle Charging Simulation (VeCS) model. System performance is determined by operational costs, CO2 emissions and employee satisfaction. Two impactful smart charging functions concern adaptive charging speeds and charging point management. Simulation algorithms for these functions are developed. The VeCS model is developed to simulate implementation of a smart charging system incorporating both charging infrastructure and local Photo-Voltaics input, using a company’s travel and energy data, prior to having the EVs in place. The model takes into account travel behaviour, energy input and energy consumption on a daily basis. The model shows the number of charged vehicles, whether incomplete charges occur, and energy flow during the day. The model also facilitates simulation of an entire year to determine overall cost and emission benefits. In addition, it estimates charging costs and CO2 emissions that can be compared to the non-EV situation. With the VeCS model, the impact of various system design and implementation choices can be explored before EVs are used. Two system designs are proposed for the case company: a short-term version with current technology and a future version with various smart functionalities. Overall, the model can contribute to substantiated advice for a company regarding implementation of charging infrastructure. Full article
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Nine-window diagram placing EV technology in its temporal and hierarchical context, showing particular elements that change over time and affect EV capabilities. Developed from the model in [<a href="#B1-energies-14-06723" class="html-bibr">1</a>].</p>
Full article ">Figure 2
<p>EV charging with a Mode 3 charging point. Energy is converted to DC by the charger inside the EV, restricting charge speeds. Note that the Control Electronics (CE) of the EVSE and EV communicate for security checks only.</p>
Full article ">Figure 3
<p>EV charging with a Mode 4 charging point. Energy is converted to DC before being sent to the EV, allowing for high charge speeds. CEs of both the EVSE and EV still communicate for security checks.</p>
Full article ">Figure 4
<p>Histogram of duration (hours) of single parking instance at a company location.</p>
Full article ">Figure 5
<p>Chance of a car being parked at a company location at a certain time of day, for all employee categories.</p>
Full article ">Figure 6
<p>Logical Influence Diagram showing inputs and their relations. Solid lines represent main influences, dotted lines stand for additional influences.</p>
Full article ">Figure 7
<p>VeCS model inputs and outputs.</p>
Full article ">Figure 8
<p>Simulation steps for an entire year or a single day with 15 min intervals.</p>
Full article ">Figure 9
<p>User Interface of the Vehicle Charging Simulation model (VeCS) within Microsoft Excel. The parts are as follows: (<b>A</b>) General input including company location size and travel parameters. Note the button to generate travel data. Furthermore, note the seed for reproducible random data generation. (<b>B</b>) Date/period selection. (<b>C</b>) Variables on energy consumption and price. (<b>D</b>) Charging strategy selection (see <a href="#sec5dot1-energies-14-06723" class="html-sec">Section 5.1</a>). (<b>E</b>) Parking/connection management strategy selection (see <a href="#sec5dot2-energies-14-06723" class="html-sec">Section 5.2</a>). (<b>F</b>) The button to start the simulations. (<b>G</b>) Results in numbers. (<b>H</b>) Left, graph with single-day charge behaviour for a single (selected) EV, and right, overview with single day results for all simulated EVs: green, fully charged; yellow, partially charged but sufficient for the next trip; red, insufficient charge. (<b>I</b>) Overall power use. (<b>J</b>) Save results.</p>
Full article ">Figure 10
<p>Energy patterns during a summer day. Shows the total charging power (grey) and power export/import (green/red).</p>
Full article ">Figure 11
<p>Nine-windows diagram placing the system in its temporal and hierarchical context, showing particular elements that affect the system and how they change over time. Developed from the model in [<a href="#B1-energies-14-06723" class="html-bibr">1</a>].</p>
Full article ">Figure 12
<p>Data and their sources used in the short-term system design.</p>
Full article ">Figure 13
<p>Function allocation to different parts of the short-term system. The back-office manages the database and runs on a server. The hub EVSE manages charging speeds for itself and all its satellite EVSE. Note that some of the static data in <a href="#energies-14-06723-f012" class="html-fig">Figure 12</a> such as the contracted power and number of charging points are omitted. These do not change and can simply be set once when the system is installed.</p>
Full article ">Figure 14
<p>Data and their sources used in the future system design.</p>
Full article ">Figure 15
<p>Function allocation to different parts of the future system. The back office manages the database and runs on a server.</p>
Full article ">Figure 16
<p>Parking times of a day at Royal Reesink BV in Apeldoorn. Each blue bar represents an employee who has parked their car.</p>
Full article ">Figure 17
<p>Charging patterns of one day of charging at Royal Reesink BV in Apeldoorn. Shown are the import from (red) and export to (green) the grid, as well as the contracted maximum (yellow) and power demand of all EVSE combined (grey).</p>
Full article ">
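For readers who want a feel for how a day-level smart-charging simulation of this kind can be structured, the sketch below steps through 96 quarter-hour intervals and adapts the charging speed to the PV surplus and a contracted grid limit. It is a simplified illustration under assumed profiles, not the VeCS model; all names and default values are hypothetical. Feeding it measured quarter-hour PV and load profiles would yield the kind of daily energy-flow picture shown in Figure 10.

```python
# Simplified day-loop sketch (hypothetical profiles, not the VeCS model):
# 15-minute intervals, charging speed adapted to PV surplus, grid import as fallback.
def simulate_day(pv_kw, base_load_kw, ev_demand_kwh,
                 max_charger_kw=11.0, contracted_kw=50.0, pv_only=False):
    """pv_kw and base_load_kw: 96 quarter-hour average powers (kW) for one day."""
    remaining_kwh = ev_demand_kwh
    grid_import_kwh = grid_export_kwh = 0.0

    for pv, load in zip(pv_kw, base_load_kw):
        surplus = max(pv - load, 0.0)              # PV power left after the building load
        grid_headroom = contracted_kw - load + pv  # keeps import below the contracted maximum
        limit = surplus if pv_only else grid_headroom

        # Adaptive charging speed: bounded by charger rating, strategy limit and remaining need.
        charge_kw = max(0.0, min(max_charger_kw, limit, remaining_kwh * 4))

        net_kw = load + charge_kw - pv             # >0 means grid import, <0 means export
        if net_kw >= 0:
            grid_import_kwh += net_kw * 0.25
        else:
            grid_export_kwh -= net_kw * 0.25

        remaining_kwh -= charge_kw * 0.25

    return remaining_kwh, grid_import_kwh, grid_export_kwh


# Hypothetical flat profiles: 10 kW PV at midday, 5 kW base load, one EV needing 30 kWh.
pv = [10.0 if 32 <= t < 64 else 0.0 for t in range(96)]
load = [5.0] * 96
print(simulate_day(pv, load, 30.0, pv_only=True))
```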
15 pages, 1132 KiB  
Article
Aligning Resilience and Wellbeing Outcomes for Locally-Led Adaptation in Tanzania
by Emilie Beauchamp, Nigel C. Sainsbury, Sam Greene and Tomas Chaigneau
Sustainability 2021, 13(16), 8976; https://doi.org/10.3390/su13168976 - 11 Aug 2021
Cited by 7 | Viewed by 3150
Abstract
Interventions to address climate adaptation have been on the rise over the past decade. Intervention programmes aim to build the resilience of local communities to climate shocks, and ultimately their wellbeing, by helping them to better prepare, adapt and recover. Resilience, similar to human wellbeing, is a multidimensional construct grounded in local realities and lived experiences. Yet current evaluation frameworks used in resilience programming rarely consider what resilience means in local contexts prior to implementation. This means policy designs risk failing to improve the resilience of communities and may create unintended negative consequences for communities’ wellbeing. Better processes and indicators for assessing resilience are needed. This paper explores the interplay between local predictors of resilience and wellbeing to assess the validity of self-assessed indicators as part of frameworks to measure resilience. We draw from research on the Devolved Climate Finance (DCF) mechanism implemented between 2014 and 2018 in Tanzania. We find that different factors explain resilience compared to wellbeing: while resilience is primarily influenced by relationships, wellbeing is correlated with livelihoods. This shows that incentives to improve resilience differ from those for improving wellbeing. Climate and development practitioners must adopt locally grounded framings of resilience and wellbeing to ensure interventions track appropriate indicators towards positive outcomes. Full article
Show Figures

Figure 1

Figure 1
<p>A project map of Monduli, Longido and Ngorongoro in Northern Tanzania [<a href="#B50-sustainability-13-08976" class="html-bibr">50</a>].</p>
Full article ">
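A minimal way to probe whether different factors track resilience versus wellbeing is to compare how candidate predictors correlate with each self-assessed outcome. The sketch below does this on hypothetical household scores; it is illustrative only and is not the DCF study's analysis.

```python
# Illustrative sketch (hypothetical data, not the DCF survey): comparing which
# candidate predictors correlate with self-assessed resilience versus wellbeing.
import numpy as np

def compare_predictors(data, predictors, outcomes=("resilience", "wellbeing")):
    """data: dict of equal-length numeric arrays; returns predictor -> (r per outcome)."""
    results = {}
    for p in predictors:
        results[p] = tuple(
            float(np.corrcoef(data[p], data[o])[0, 1]) for o in outcomes
        )
    return results

# Hypothetical household-level scores (placeholders only).
data = {
    "relationships": np.array([3, 4, 2, 5, 4, 3, 5, 2]),
    "livelihoods":   np.array([2, 3, 1, 4, 5, 2, 4, 1]),
    "resilience":    np.array([3, 4, 2, 5, 4, 3, 5, 2]),
    "wellbeing":     np.array([2, 3, 2, 4, 5, 2, 4, 1]),
}
print(compare_predictors(data, ["relationships", "livelihoods"]))
```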
19 pages, 4617 KiB  
Article
Hyper-Heuristic Capacitance Array Method for Multi-Metal Wear Debris Detection
by Yanshan Sun, Lecheng Jia and Zhoumo Zeng
Sensors 2019, 19(3), 515; https://doi.org/10.3390/s19030515 - 26 Jan 2019
Cited by 15 | Viewed by 2770
Abstract
Online detection of fatigue wear debris in the lubricants of aero-engines can provide warning of engine failure during flight and thus has great economic and social benefits. In this paper, we propose a capacitance array sensor and a hyper-heuristic partial differential equation (PDE) inversion method for detecting multiple micro-scale metal debris particles, combined with a self-adaptive cellular genetic algorithm (SA-CGA) and morphological algorithms. First, unlike traditional methods, which are limited to multi-induction Dirac-boundary inversion, a mathematical model with non-local boundary conditions is established. Furthermore, a hyper-heuristic method based on prior knowledge is proposed to extract the wear debris characteristics. Moreover, a 12-plate circulating array sensor and corresponding detection system are designed. The experimental results were compared with optical microscopy. The results show that, for one to three wear debris particles with diameters between 250 and 900 μm, the accuracy of the proposed method is 10–38% higher than that of the traditional methods, and the recognition error of the wear debris count decreases to zero. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Mechanism of capacitance detection of multi-metal wear debris.</p>
Full article ">Figure 2
<p>Detection of multi-wear debris capacitance array.</p>
Full article ">Figure 3
<p>Simulation results of sensitivity field distribution. (<b>a</b>) Non-debris sensitivity fields adjacent to the plate. (<b>b</b>) Non-debris sensitivity field far from the plate. (<b>c</b>) 3D finite element models of multiple debris sources. (<b>d</b>) Abrasive induction field profile. (<b>e</b>) Multi-debris adjacent plate sensitivity field. (<b>f</b>) Multi-debris adjacent plate sensitivity field profile. (<b>g</b>) Multi-debris sensitivity field far from the plate. (<b>h</b>) Multi-debris sensitivity field profile far from the plate.</p>
Full article ">Figure 4
<p>Wear debris resolution with different distributions.</p>
Full article ">Figure 5
<p>SA-CGA inversion results: (<b>a</b>) Single debris (250 μm). (<b>b</b>) Two debris. (<b>c</b>) Three debris (2 big and 1 small). (<b>d</b>) Three debris (1 big and 2 small).</p>
Full article ">Figure 6
<p>Hyper-heuristic wear debris edge detection with different operators: (<b>a</b>) Sobel operator. (<b>b</b>) Prewitt operator. (<b>c</b>) LOG operator. (<b>d</b>) Canny operator.</p>
Full article ">Figure 7
<p>Hyper-heuristic skeleton extraction of wear debris. (<b>a</b>) Direct skeleton extraction of wear debris. (<b>b</b>) Mathematical morphology hyper-heuristic skeleton extraction.</p>
Full article ">Figure 8
<p>Experimental set-up. (<b>a</b>) Detection system in the experiment. (<b>b</b>) 3-D model of capacitance array sensor.</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>Debris imaging of CCD and capacitance array inversion methods. (<b>a</b>) Single 200 μm debris CCD image. (<b>c</b>) Single 500 μm debris CCD image. (<b>e</b>) Two small debris (separated from each other) CCD images. (<b>g</b>) Two adjacent debris (1 big and 1 small) CCD image. (<b>i</b>) Three debris (separated from each other) CCD image. (<b>k</b>) Three adjacent debris CCD image. (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>,<b>l</b>) Corresponding capacitance array inversion.</p>
Figure 9 Cont.">
Full article ">
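The edge operators and skeleton extraction named in Figures 6 and 7 are standard image-processing building blocks. The sketch below applies them to a synthetic debris-like image using scikit-image; it illustrates the generic operators only and is not the paper's hyper-heuristic inversion pipeline.

```python
# Illustrative sketch with scikit-image (not the paper's inversion pipeline):
# edge operators and morphological skeleton extraction on a debris-like test image.
import numpy as np
from skimage.filters import sobel, prewitt, gaussian, laplace
from skimage.feature import canny
from skimage.morphology import skeletonize

# Hypothetical test image: two bright blobs standing in for wear debris.
image = np.zeros((64, 64))
image[20:28, 20:30] = 1.0
image[40:44, 45:52] = 1.0
image = gaussian(image, sigma=1.0)      # soften edges, as a reconstructed image would be

edges = {
    "sobel":   sobel(image),
    "prewitt": prewitt(image),
    "log":     laplace(gaussian(image, sigma=2.0)),   # Laplacian of Gaussian
    "canny":   canny(image, sigma=1.5),
}

# Skeleton of the thresholded debris regions (cf. morphological skeleton extraction).
skeleton = skeletonize(image > 0.5)
print({name: float(np.abs(e).max()) for name, e in edges.items()}, int(skeleton.sum()))
```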