Search Results (44)

Search Parameters:
Keywords = diffusion model (DM)

12 pages, 1117 KiB  
Article
CycleDiffusion: Voice Conversion Using Cycle-Consistent Diffusion Models
by Dongsuk Yook, Geonhee Han, Hyung-Pil Chang and In-Chul Yoo
Appl. Sci. 2024, 14(20), 9595; https://doi.org/10.3390/app14209595 - 21 Oct 2024
Viewed by 812
Abstract
Voice conversion (VC) refers to the technique of modifying one speaker’s voice to mimic another’s while retaining the original linguistic content. This technology finds its applications in fields such as speech synthesis, accent modification, medicine, security, privacy, and entertainment. Among the various deep generative models used for voice conversion, including variational autoencoders (VAEs) and generative adversarial networks (GANs), diffusion models (DMs) have recently gained attention as promising methods due to their training stability and strong performance in data generation. Nevertheless, traditional DMs focus mainly on learning reconstruction paths like VAEs, rather than conversion paths as GANs do, thereby restricting the quality of the converted speech. To overcome this limitation and enhance voice conversion performance, we propose a cycle-consistent diffusion (CycleDiffusion) model, which comprises two DMs: one for converting the source speaker’s voice to the target speaker’s voice and the other for converting it back to the source speaker’s voice. By employing two DMs and enforcing a cycle consistency loss, the CycleDiffusion model effectively learns both reconstruction and conversion paths, producing high-quality converted speech. The effectiveness of the proposed model in voice conversion is validated through experiments using the VCTK (Voice Cloning Toolkit) dataset. Full article
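The combined objective described above can be sketched as follows. This is a minimal illustration with hypothetical function names, not the authors' implementation; the paper's actual noise-prediction targets and loss weighting may differ:

```python
import numpy as np

def diffusion_loss(eps_pred, eps_true):
    # Standard denoising objective: MSE between the noise predicted by
    # the DM and the actual noise added in the forward process.
    return float(np.mean((eps_pred - eps_true) ** 2))

def cycle_loss(x_src, x_cycled):
    # Cycle-consistency term: converting source -> target -> source
    # should reproduce the original source features.
    return float(np.mean(np.abs(x_src - x_cycled)))

def total_loss(eps_pred, eps_true, x_src, x_cycled, lam=1.0):
    # Combined objective over the two DMs; lam (assumed) weights
    # the cycle term against the reconstruction term.
    return diffusion_loss(eps_pred, eps_true) + lam * cycle_loss(x_src, x_cycled)
```

With both DMs perfect (predicted noise exact, cycle reproducing the input), the total loss is zero; in practice the cycle term forces the conversion path, not just the reconstruction path, to be learned.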
(This article belongs to the Section Computing and Artificial Intelligence)
Figure 1
The proposed CycleDiffusion model for a two-speaker case. The solid lines depict the conventional reconstruction path training using ℒ_diffusion, while the dotted and dashed lines represent the novel conversion path training using ℒ_cycle. The proposed method leverages both ℒ_diffusion and ℒ_cycle. The dotted lines indicate inference paths (i.e., voice conversion), which are also required for cycle consistency training. Specifically, the blue dotted lines indicate voice conversion from speaker ζ to speaker ξ, whereas the blue dashed lines indicate voice conversion from speaker ξ to speaker ζ. The red lines show the opposite direction of the blue lines. When training parameters via gradient descent on the cycle consistency loss, the parameters associated with the dotted lines are frozen and those involved in the dashed lines are updated.
Figure 2">
Figure 2
Sample spectrograms of the converted utterances using DiffVC and CycleDiffusion: (a) an example where the MOS score of the utterance processed by CycleDiffusion is higher than that of DiffVC; (b) an example where the MOS score of the utterance processed by CycleDiffusion is lower than that of DiffVC.
25 pages, 4317 KiB  
Article
Spatial Downscaling of Sea Surface Temperature Using Diffusion Model
by Shuo Wang, Xiaoyan Li, Xueming Zhu, Jiandong Li and Shaojing Guo
Remote Sens. 2024, 16(20), 3843; https://doi.org/10.3390/rs16203843 - 16 Oct 2024
Viewed by 555
Abstract
In recent years, advances in high-resolution digital twin platforms and artificial intelligence marine forecasting have increased the demand for high-resolution oceanic data. However, existing sea surface temperature (SST) products from observations often fail to meet researchers’ resolution requirements. Deep learning models serve as practical techniques for improving the spatial resolution of SST data. In particular, diffusion models (DMs) have attracted widespread attention due to their ability to generate more vivid and realistic results than other neural networks. Despite DMs’ potential, their application to SST spatial downscaling remains largely unexplored. Hence, we propose a novel DM-based spatial downscaling model, called DIFFDS, designed to produce a high-resolution version of the input SST and to restore most of the mesoscale processes. Experimental results indicate that DIFFDS is more effective and accurate than baseline neural networks; its downscaled high-resolution SST data are also visually comparable to the ground truth. DIFFDS achieves an average root-mean-square error of 0.1074 °C and a peak signal-to-noise ratio of 50.48 dB in the 4× downscaling task, demonstrating its accuracy. Full article
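The two headline metrics quoted above, RMSE and PSNR, can be computed as below. This is a generic sketch, not the authors' evaluation code; `data_range` is assumed here to be the span of valid SST values in the evaluation region:

```python
import numpy as np

def rmse(pred, truth):
    # Root-mean-square error, in the data's physical units (here, degrees C).
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def psnr(pred, truth, data_range):
    # Peak signal-to-noise ratio in dB: 20 * log10(data_range / RMSE).
    # Higher is better; an RMSE of 1% of the data range gives 40 dB.
    return float(20.0 * np.log10(data_range / rmse(pred, truth)))
```

For example, a uniform 0.1 °C error over a field with a 10 °C range yields an RMSE of 0.1 and a PSNR of 40 dB.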
(This article belongs to the Special Issue Artificial Intelligence for Ocean Remote Sensing)
Figure 1
The study area used in this paper.
Figure 2">
Figure 2
The forward and reverse diffusion processes of the diffusion model, where q(x_t | x_{t−1}) denotes the forward process that transforms the distribution q(x_{t−1}) into q(x_t), and p_θ(x_{t−1} | x_t) denotes the reverse process that transforms p_θ(x_t) into p_θ(x_{t−1}).
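The forward process in this caption admits the usual closed-form jump from x_0 straight to x_t. A minimal sketch of the standard DDPM formulation (not the authors' code), where alpha_bar_t is the cumulative product of (1 − beta) up to step t:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    # Closed-form sample from q(x_t | x_0):
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    # where alpha_bar_t = prod_{s<=t} (1 - beta_s).
    alpha_bar = np.cumprod(1.0 - np.asarray(betas))[t]
    noise = rng.standard_normal(np.shape(x0))
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
```

As beta values grow, alpha_bar_t shrinks toward zero and x_t approaches pure Gaussian noise; the reverse process p_θ is learned to undo these steps.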
Figure 3">
Figure 3
(a) The first training stage, detailing the training of CPEN and DIRformer. (b) The second training stage, which is also the forward process of the DDPM. (c) The third stage, showing the inference process of DIFFDS.
Figure 4">
Figure 4
Architecture details of DIFFDS: (a) CPEN, (b) denoising network, (c) DIRformer, (d) Transformer block.
Figure 5">
Figure 5
The maximum, minimum, median, and upper and lower quartiles of each metric, (a) RMSE, (b) MAE, (c) PSNR, (d) Bias, for each method. (e) The TCC plot for each point in the experimental sea area.
Figure 6">
Figure 6
The time series of each metric, (a) RMSE, (b) MAE, (c) PSNR, (d) Bias, from March 2021 to February 2022. The text highlighted in blue marks the dates of the dotted lines; the results for these two dates are used in the discussion section.
Figure 7">
Figure 7
(a–f) The density scatter plots of each model.
Figure 8">
Figure 8
The SST distribution of each model on 29 June 2021. The eight subplots individually display the high-resolution SST, the low-resolution SST, and the downscaled results of each model along with their RMSE (°C).
Figure 9">
Figure 9
(a–f) The absolute bias maps between the ground truth and each deep learning model on 29 June 2021. The red box marks the area of intense bias.
Figure 10">
Figure 10
The results on 29 June 2021, zoomed to 9°N–17°N and 108°E–116°E. The red and black boxes mark areas with erroneous SST content.
Figure 11">
Figure 11
The SST distribution of each model on 1 August 2021. The eight subplots individually display the high-resolution SST, the low-resolution SST, and the downscaled results of each model along with their RMSE (°C).
Figure 12">
Figure 12
(a–f) The absolute bias maps between the ground truth and each deep learning model on 1 August 2021.
Figure 13">
Figure 13
The results on 1 August 2021, zoomed to 10°N–18°N and 107°E–115°E. The black box marks an area with complex SST content.
Figure 14">
Figure 14
Variations in the low-resolution, ground-truth, DIFFIR, and DIFFDS SST data over a continuous five-day period. Red isotherms highlight the boundary of the upwelling currents.
Figure 15">
Figure 15
The spatial distribution of RMSE for DIFFIR and DIFFDS on the test set. The red box highlights the significant difference between DIFFIR and DIFFDS.
20 pages, 7209 KiB  
Article
DM–AHR: A Self-Supervised Conditional Diffusion Model for AI-Generated Hairless Imaging for Enhanced Skin Diagnosis Applications
by Bilel Benjdira, Anas M. Ali, Anis Koubaa, Adel Ammar and Wadii Boulila
Cancers 2024, 16(17), 2947; https://doi.org/10.3390/cancers16172947 - 23 Aug 2024
Viewed by 858
Abstract
Accurate skin diagnosis through end-user applications is important for the early detection and treatment of severe skin diseases. However, the low quality of dermoscopic images hampers this goal, especially when hair is present in the images. This paper introduces DM–AHR, a novel self-supervised conditional diffusion model designed specifically for the automatic generation of hairless dermoscopic images to improve the quality of skin diagnosis applications. The current research contributes in three significant ways to the field of dermatologic imaging. First, we develop a customized diffusion model that adeptly differentiates between hair and skin features. Second, we pioneer a novel self-supervised learning strategy that is specifically tailored to optimize performance for hairless imaging. Third, we introduce a new dataset, named DERMAHAIR (DERMatologic Automatic HAIR Removal Dataset), that is designed to advance and benchmark research in this specialized domain. These contributions significantly enhance the clarity of dermoscopic images, improving the accuracy of skin diagnosis procedures. We elaborate on the architecture of DM–AHR and demonstrate its effective performance in removing hair while preserving critical details of skin lesions. Our results show an enhancement in the accuracy of skin lesion analysis when compared to existing techniques. Given its robust performance, DM–AHR holds considerable promise for broader application in medical image enhancement. Full article
(This article belongs to the Section Methods and Technologies Development)
Figure 1
Illustration of the DM–AHR diffusion process.
Figure 2">
Figure 2
Inference steps of DM–AHR applied to one melanoma skin case, performed in 10 iterations (T = 10).
Figure 3">
Figure 3
Generated samples using the customized self-supervised technique.
Figure 4">
Figure 4
Performance of skin classification using the Swin Transformer before applying DM–AHR (hair-occluded images): (a) confusion matrix, (b) accuracy curve during training, (c) ROC curve, and (d) precision–recall curve.
Figure 5">
Figure 5
Performance of skin classification using the Swin Transformer after applying DM–AHR (dehaired images): (a) confusion matrix, (b) accuracy curve during training, (c) ROC curve, and (d) precision–recall curve.
24 pages, 1377 KiB  
Review
A Survey on Surface Defect Inspection Based on Generative Models in Manufacturing
by Yu He, Shuai Li, Xin Wen and Jing Xu
Appl. Sci. 2024, 14(15), 6774; https://doi.org/10.3390/app14156774 - 2 Aug 2024
Viewed by 981
Abstract
Surface defect inspection based on deep learning has demonstrated outstanding performance in improving detection accuracy and model generalization. However, the small scale of defect datasets limits the application of deep models in industry. Generative models can produce realistic samples at very low cost, which can effectively solve this problem and has therefore received widespread attention in recent years. This paper provides a comprehensive analysis and summary of surface defect inspection methods proposed between 2022 and 2024. First, according to the generative model used, these methods are classified into four categories: Variational Auto-Encoders (VAEs), Generative Adversarial Networks (GANs), Diffusion Models (DMs), and multi-models. Second, the research status of surface defect inspection based on generative models in recent years is discussed from four aspects: sample generation, detection objective, inspection task, and learning model. Then, the public datasets and evaluation metrics commonly used for surface defect inspection are discussed, and a comparative evaluation of defect inspection methods based on generative models is provided. Finally, this study discusses the remaining challenges for defect inspection methods based on generative models, providing insights for future research. Full article
(This article belongs to the Special Issue Deep Learning for Image Recognition and Processing)
Figure 1
Examples of surface defect images. Upper row: defects on rigid surfaces (https://pan.baidu.com/s/1l_RjTP7aTwr57ahcwelTpA, accessed on 25 July 2024). Bottom row: defects on flexible surfaces (https://ibug.doc.ic.ac.uk/resources/fabrics/, accessed on 25 July 2024).
Figure 2">
Figure 2
Comparison of major generative models used in defect inspection.
18 pages, 22790 KiB  
Article
Universal Image Restoration with Text Prompt Diffusion
by Bing Yu, Zhenghui Fan, Xue Xiang, Jiahui Chen and Dongjin Huang
Sensors 2024, 24(12), 3917; https://doi.org/10.3390/s24123917 - 17 Jun 2024
Viewed by 1009
Abstract
Universal image restoration (UIR) aims to accurately restore images with a variety of unknown degradation types and levels. Existing methods, both learning-based and prior-based, rely heavily on low-quality image features. However, it is challenging to extract degradation information from diverse low-quality images, which limits model performance. Furthermore, UIR necessitates the recovery of images with diverse and complex types of degradation, and inaccurate estimations further decrease restoration performance, resulting in suboptimal recovery. A viable approach to enhancing UIR performance is to introduce additional priors, since current UIR methods suffer from weak enhancement and limited generality. To this end, we propose an effective framework based on a diffusion model (DM) for universal image restoration, dubbed ETDiffIR. Inspired by the remarkable performance of text prompts in the field of image generation, we employ text prompts to improve the restoration of degraded images: a text prompt corresponding to the low-quality image assists the diffusion model in restoring it. Specifically, a novel text–image fusion block is proposed that combines the CLIP text encoder and the DA-CLIP image controller, integrating text prompt encoding and degradation type encoding into the time step encoding. Moreover, to reduce the computational cost of the denoising UNet in the diffusion model, we develop an efficient restoration U-shaped network (ERUNet) that achieves favorable noise prediction performance via depthwise convolution and pointwise convolution. We evaluate the proposed method on image dehazing, deraining, and denoising tasks. The experimental results indicate the superiority of our proposed algorithm. Full article
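The depthwise-plus-pointwise factorization mentioned above can be sketched as follows. This is an illustrative NumPy version with hypothetical names, not the ERUNet implementation; real networks use optimized convolution kernels rather than explicit loops:

```python
import numpy as np

def depthwise_pointwise(x, dw_kernels, pw_weights):
    # x: (C, H, W); dw_kernels: (C, k, k), one spatial filter per channel;
    # pw_weights: (C_out, C), a 1x1 channel-mixing matrix.
    # This factorization replaces a dense C_out x C x k x k convolution,
    # cutting the multiply count roughly by a factor of k*k.
    C, H, W = x.shape
    k = dw_kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    dw = np.zeros_like(x)
    for c in range(C):                      # depthwise: per-channel spatial conv
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
    # pointwise: 1x1 convolution mixing channels
    return np.tensordot(pw_weights, dw, axes=([1], [0]))
```

With a center-tap depthwise kernel and an identity pointwise matrix, the layer reduces to the identity map, which makes the factorization easy to sanity-check.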
(This article belongs to the Special Issue Intelligent Sensing and Artificial Intelligence for Image Processing)
Figure 1
An overview of the forward and reverse diffusion processes using mean-reverting stochastic differential equations. The forward process simulates the degradation of an HQ image x_0 into an LQ image μ by diffusing x_0 towards μ + ε.
Figure 2">
Figure 2
The overall architecture of our proposed ETDiffIR. It comprises a text–image fusion block (TIFB) and ERUNet for noise prediction. The TIFB incorporates a pretrained CLIP text encoder and a pretrained DA-CLIP image controller, with their weights frozen during training.
Figure 3">
Figure 3
Illustration of the efficient restoration U-shaped network (ERUNet).
Figure 4">
Figure 4
An example of image–text pair generation using MiniGPT-4.
Figure 5">
Figure 5
Dehazing comparisons for universal methods on images from the SOTS dataset [51]. The proposed model better preserves image details.
Figure 6">
Figure 6
Image deraining comparisons for universal methods on images from the Rain100L dataset [57]. The proposed method effectively removes rain streaks to obtain rain-free images.
Figure 7">
Figure 7
Image denoising comparisons for universal methods on images from the CBSD68 dataset [49].
Figure 8">
Figure 8
Visualization results of ablation experiments on the effectiveness of the proposed TIFB and ERB.
Figure 9">
Figure 9
Training curves of model variants, demonstrating the effectiveness of our TIFBs: (a) image denoising, (b) image deraining, (c) image dehazing.
Figure 10">
Figure 10
Visual comparison of different textual prompts.
18 pages, 5383 KiB  
Article
Reliable Out-of-Distribution Recognition of Synthetic Images
by Anatol Maier and Christian Riess
J. Imaging 2024, 10(5), 110; https://doi.org/10.3390/jimaging10050110 - 1 May 2024
Viewed by 1547
Abstract
Generative adversarial networks (GANs) and diffusion models (DMs) have revolutionized the creation of synthetically generated but realistic-looking images. Distinguishing such generated images from real camera captures is one of the key tasks in current multimedia forensics research. One particular challenge is the generalization to unseen generators or post-processing. This can be viewed as an issue of handling out-of-distribution inputs. Forensic detectors can be hardened by the extensive augmentation of the training data or specifically tailored networks. Nevertheless, such precautions only manage but do not remove the risk of prediction failures on inputs that look reasonable to an analyst but in fact are out of the training distribution of the network. With this work, we aim to close this gap with a Bayesian Neural Network (BNN) that provides an additional uncertainty measure to warn an analyst of difficult decisions. More specifically, the BNN learns the task at hand and also detects potential confusion between post-processing and image generator artifacts. Our experiments show that the BNN achieves on-par performance with the state-of-the-art detectors while producing more reliable predictions on out-of-distribution examples. Full article
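The abstain behavior described above can be sketched as a simple threshold on the spread of Monte Carlo predictions from the BNN. This is a hedged illustration with hypothetical names and class ordering, not the authors' uncertainty measure:

```python
import numpy as np

def predict_or_abstain(prob_samples, sigma_abstain):
    # prob_samples: (n_mc, n_classes) class probabilities from n_mc
    # stochastic forward passes of a Bayesian network. If the standard
    # deviation of the winning class's probability across passes exceeds
    # sigma_abstain, refuse to decide rather than risk an unreliable
    # out-of-distribution prediction.
    mean = prob_samples.mean(axis=0)
    cls = int(np.argmax(mean))
    if prob_samples[:, cls].std() > sigma_abstain:
        return "abstain"
    return ["real", "synthetic", "compressed"][cls]  # assumed ordering
```

Tightening sigma_abstain lowers the error rate at the cost of abstaining more often, the tradeoff discussed in the article's Figure 6.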
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
Figure 1
Architecture of the Bayesian Neural Network. Each of the four wavelet sub-bands is used as a separate input to a sequence of three convolutional layers followed by two fully connected layers; the target classes are “real”, “synthetic”, and “compressed”.
Figure 2">
Figure 2
Detection of out-of-distribution examples by the BNN and CNN on the UFDD and LSUN datasets. Left: ROC curves of the BNN model. Right: ROC curves of the CNN.
Figure 3">
Figure 3
(Left) Image generated by the EG3D model (out-of-distribution). (Middle) Class activations for the uncompressed image. The BNN correctly shows a high activation for the synthetic class and a high uncertainty. (Right) Class activations for the image after JPEG compression with quality factor Q = 90. Here, the model becomes highly uncertain about its decision and abstains from a prediction; however, the most likely classes are now real alongside compressed.
Figure 4">
Figure 4
Activation heatmap of the BNN for a sample image from the EG3D model. Each column shows the activation for the corresponding class. The top row shows the activations for the uncompressed case: for each class, different image regions are dominant, with some overlap between the real and compressed classes. The bottom row shows the class activations for the JPEG-compressed case: the activation for the synthetic class becomes less dominant, and the activations for the compressed and real classes share mostly the same regions, which is a telltale sign of unreliable post-processing confusion.
Figure 5">
Figure 5
Mean model prediction on the TrueFace dataset [59] for real and synthetic data before and after uploading to Facebook, Telegram, Twitter, and WhatsApp. Synthetic images were generated with the StyleGAN architecture, and each evaluation used 100 images from the test set. While images prior to the respective platform upload are on average correctly classified, our model shows greatly increased uncertainty and abstains from predictions afterwards, rendering the post-social predictions unreliable. One notable exception is synthetic images uploaded to Twitter, which are falsely classified as real with high confidence; however, these false predictions can be detected by our SSIM-based threshold, as they highly overlap with compression artifacts.
Figure 6">
Figure 6
(Left) Error rate as a function of the uncertainty threshold σ_abstain. (Right) Abstain rate as a function of σ_abstain. Choosing a more conservative σ_abstain significantly reduces the error rate; however, the abstain rate increases as more predictions are marked unreliable. In both figures, the dotted line shows the σ_abstain threshold chosen for our previous evaluations. In most cases, a threshold of twice the in-distribution model uncertainty offers a reasonable tradeoff between the error rate and the abstain rate.
Figure 7">
Figure 7
Comparison of the BNN error rate on different OOD datasets with and without noise-contrastive estimation (lower is better).
8 pages, 588 KiB  
Brief Report
Constraining the Inner Galactic DM Density Profile with H.E.S.S.
by Jaume Zuriaga-Puig
Astronomy 2024, 3(2), 114-121; https://doi.org/10.3390/astronomy3020008 - 11 Apr 2024
Viewed by 1001
Abstract
In this short review, corresponding to a talk given at the conference “Cosmology 2023 in Miramare”, we combine an analysis of five regions observed by H.E.S.S. in the Galactic Center, aiming to constrain the Dark Matter (DM) density profile in a WIMP annihilation scenario. The analysis includes the state-of-the-art Galactic diffuse emission Gamma-optimized model computed with DRAGON and a wide range of DM density profiles, from cored to cuspy, including different kinds of DM spikes. Our results are able to constrain generalized NFW profiles with an inner slope γ ≳ 1.3. When DM spikes are considered, the adiabatic spike is completely ruled out; however, the smoother spikes produced by interactions with the bulge stars are compatible if γ ≲ 0.8, with an internal spike slope of γ_sp-stars = 1.5. Full article
(This article belongs to the Special Issue Current Trends in Cosmology)
Figure 1
Five regions of interest. Left panel: VIR (green), θ < 0.1° (r ≲ 15 pc); Ridge (gray), |b| < 0.3° (43 pc) and |l| < 1.0° (145 pc), with some masks applied; Diffuse region (blue), 0.15° < θ < 0.45° (22 pc ≲ r ≲ 65 pc); Halo (red), 0.3° < θ < 1.0° (43 pc ≲ r ≲ 145 pc), excluding the latitudes |b| < 0.3° (the Galactic plane). Right panel: IGS (orange), 0.5° < θ < 3.0° (72 pc ≲ r ≲ 434 pc), excluding the Galactic plane and several sources (light grey).
Figure 2">
Figure 2
Comparison of the J-factor ⟨J⟩_ΔΩ between the different DM models (first row: generalized NFW; second row: adiabatic spike; third row: star spike). The fit values and upper limits come from the gamma-ray spectra observed by H.E.S.S. in the regions defined in Figure 1, assuming the thermal relic cross-section ⟨σv⟩ ≃ 2.2 × 10⁻²⁶ cm³ s⁻¹. The uncertainties of the fit values are shown in grey (1σ and 2σ for VIR and Ridge, 2σ for Diffuse, and 2σ upper limits for Halo). See the text for more details.
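For context, the solid-angle-averaged J-factor compared in this figure is conventionally defined, for annihilating DM, as the line-of-sight (l.o.s.) integral of the squared DM density averaged over the region ΔΩ:

```latex
\langle J \rangle_{\Delta\Omega}
  = \frac{1}{\Delta\Omega}\int_{\Delta\Omega} \mathrm{d}\Omega
    \int_{\mathrm{l.o.s.}} \rho_{\mathrm{DM}}^{2}\!\left(r(s,\Omega)\right)\,\mathrm{d}s
```

Because the integrand is quadratic in the density, steeper inner slopes γ (cuspier profiles or spikes) sharply increase ⟨J⟩_ΔΩ near the Galactic Center, which is why the cuspiest models are the first to be excluded by the H.E.S.S. upper limits.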
Full article ">Figure 2 Cont.
<p>Comparison of the J-factor <math display="inline"><semantics> <msub> <mrow> <mo>〈</mo> <mi>J</mi> <mo>〉</mo> </mrow> <mrow> <mo>Δ</mo> <mo>Ω</mo> </mrow> </msub> </semantics></math> between the different DM models (first row for generalized NFW, second for adiabatic spike, and star spike in the third one). The fit values and upper limits come from the gamma-ray spectra observed by H.E.S.S. in the regions defined in <a href="#astronomy-03-00008-f001" class="html-fig">Figure 1</a>, assuming the thermal relic cross-section <math display="inline"><semantics> <mrow> <mrow> <mo>〈</mo> <mi>σ</mi> <mi>v</mi> <mo>〉</mo> </mrow> <mo>≃</mo> <mn>2.2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>26</mn> </mrow> </msup> <msup> <mi>cm</mi> <mn>3</mn> </msup> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>. We show in grey the uncertainties of the fit values (1<math display="inline"><semantics> <mi>σ</mi> </semantics></math> and 2<math display="inline"><semantics> <mi>σ</mi> </semantics></math> for VIR and Ridge, 2<math display="inline"><semantics> <mi>σ</mi> </semantics></math> for Diffuse, and 2<math display="inline"><semantics> <mi>σ</mi> </semantics></math> upper limits for Halo). See the text for more details.</p>
Full article ">
21 pages, 6228 KiB  
Article
An Improved SAR Ship Classification Method Using Text-to-Image Generation-Based Data Augmentation and Squeeze and Excitation
by Lu Wang, Yuhang Qi, P. Takis Mathiopoulos, Chunhui Zhao and Suleman Mazhar
Remote Sens. 2024, 16(7), 1299; https://doi.org/10.3390/rs16071299 - 7 Apr 2024
Cited by 3 | Viewed by 1650
Abstract
Synthetic aperture radar (SAR) plays a crucial role in maritime surveillance due to its capability for all-weather, all-day operation. However, SAR ship recognition faces challenges, primarily due to the imbalance and inadequacy of ship samples in publicly available datasets, along with the presence [...] Read more.
Synthetic aperture radar (SAR) plays a crucial role in maritime surveillance due to its capability for all-weather, all-day operation. However, SAR ship recognition faces challenges, primarily due to the imbalance and inadequacy of ship samples in publicly available datasets, along with the presence of numerous outliers. To address these issues, this paper proposes a SAR ship classification method based on text-generated images to tackle dataset imbalance. Firstly, an image generation module is introduced to augment SAR ship data. This method generates images from textual descriptions to overcome the problem of insufficient samples and the imbalance between ship categories. Secondly, given the limited information content in the black background of SAR ship images, the Tokens-to-Token Vision Transformer (T2T-ViT) is employed as the backbone network. This approach effectively combines local information on the basis of global modeling, facilitating the extraction of features from SAR images. Finally, a Squeeze-and-Excitation (SE) model is incorporated into the backbone network to enhance the network’s focus on essential features, thereby improving the model’s generalization ability. To assess the model’s effectiveness, extensive experiments were conducted on the OpenSARShip2.0 and FUSAR-Ship datasets. The performance evaluation results indicate that the proposed method achieves higher classification accuracy in the context of imbalanced datasets compared to eight existing methods. Full article
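The Squeeze-and-Excitation (SE) recalibration mentioned in this abstract (squeeze by global average pooling, excitation through a bottleneck, then per-channel rescaling) can be sketched in a few lines. The following is a minimal NumPy illustration under assumed shapes and randomly chosen weights, not the paper's implementation; `se_block`, the reduction ratio, and the input size are all assumptions for the example.

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel recalibration (illustrative sketch).

    feature_map: (C, H, W) array; w1: (C//r, C); w2: (C, C//r).
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid
    s = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # per-channel weights in (0, 1)
    # Scale: reweight each channel of the input
    return feature_map * s[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))        # assumed toy feature map, C = 8
w1 = rng.standard_normal((2, 8)) * 0.1    # reduction ratio r = 4 (assumed)
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)
```

Because the excitation weights lie in (0, 1), the block can only attenuate channels, which is what lets the network emphasize the informative ones.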
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)
Show Figures

Figure 1
<p>A structural comparison diagram between CNN and ViT.</p>
Full article ">Figure 2
<p>The pre-training process of CLIP.</p>
Full article ">Figure 3
<p>The overall framework of the proposed method.</p>
Full article ">Figure 4
<p>The operation of the diffusion model.</p>
Full article ">Figure 5
<p>The structure of the T2T module.</p>
Full article ">Figure 6
<p>The structure of the SE module.</p>
Full article ">Figure 7
<p>SAR ship samples from the OpenSARShip2.0 dataset representing (<b>a</b>) an optical image of Cargo; (<b>b</b>) a SAR image of Cargo; (<b>c</b>) an optical image of Fishing; (<b>d</b>) a SAR image of Fishing; (<b>e</b>) an optical image of Tug; and (<b>f</b>) a SAR image of Tug.</p>
Full article ">Figure 8
<p>SAR ship samples in FUSAR-Ship dataset. Among these, (<b>a</b>–<b>e</b>) represent optical images of Bulk Carrier, Cargo, Fishing, Tanker, and Other, and (<b>f</b>–<b>j</b>) represent SAR images of Bulk Carrier, Cargo, Fishing, Tanker, and Other.</p>
Full article ">Figure 9
<p>Image results generated by inputting text information: SAR Cargo, SAR Fishing, SAR Tug; (<b>a</b>–<b>c</b>) represent Cargo images, while (<b>d</b>–<b>f</b>) represent Fishing images, and (<b>g</b>–<b>i</b>) represent Tug images.</p>
Full article ">
14 pages, 1743 KiB  
Article
Physical and Chemical Properties of Convective- and Microwave-Dried Blackberry Fruits Grown Using Organic Procedures
by Marko Petković, Nemanja Miletić, Valerija Pantelić, Vladimir Filipović, Biljana Lončar and Olga Mitrović
Foods 2024, 13(5), 791; https://doi.org/10.3390/foods13050791 - 4 Mar 2024
Viewed by 1341
Abstract
This study aimed to evaluate the effect of convective and microwave drying on the bioactive-compounds content of blackberry (Rubus fruticosus) fruits, as well as drying parameters and energy consumption. The fruit was dehydrated in a convective dehydrator at a temperature of [...] Read more.
This study aimed to evaluate the effect of convective and microwave drying on the bioactive-compound content of blackberry (Rubus fruticosus) fruits, as well as the drying parameters and energy consumption. The fruit was dehydrated in a convective dehydrator at temperatures of 50 °C and 70 °C and in a microwave oven at power levels of 90 W, 180 W and 240 W. The highest amounts of anthocyanins and polyphenols, and the highest antioxidant capacity, were obtained in blackberry fruits microwave dried at 90 W and 180 W (46.3–52.5 and 51.8–83.5 mg 100 g−1 dm of total anthocyanins, 296.3–255.8 and 418.4–502.2 mg 100 g−1 dm of total phenolics, and 1.20–1.51 and 1.45–2.35 mmol TE 100 g−1 dm of antioxidant capacity at 90 W and 180 W, respectively). Microwave dehydration also shortened the processing time and lowered the energy consumption compared to convective drying (a drying-time reduction of 92–99%). Blackberry fruits dehydrated at 240 W showed the shortest dehydration time (59–67 min), minimal energy consumption (0.23 kWh) and the most efficient diffusion (1.48–1.66 × 10−8 m2 s−1). Full article
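The "most efficient diffusion" figure quoted above is an effective moisture diffusivity. A common way to estimate it, sketched below, uses the first term of the Fick series solution for a slab, MR(t) = (8/π²)·exp(−π²·D_eff·t/(4L²)), so D_eff follows from the slope of ln(MR) versus time. The slab geometry, half-thickness, and diffusivity value here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def estimate_deff(t, mr, half_thickness):
    """Effective diffusivity from the slope of ln(MR) vs t (slab geometry)."""
    slope = np.polyfit(t, np.log(mr), 1)[0]
    return -slope * 4.0 * half_thickness**2 / np.pi**2

L = 5e-3                      # slab half-thickness: 5 mm (assumed)
d_true = 1.5e-8               # m^2 s^-1, same order as reported in the abstract
t = np.linspace(0, 3600, 50)  # one hour of drying, in seconds
mr = (8 / np.pi**2) * np.exp(-np.pi**2 * d_true * t / (4 * L**2))

print(f"D_eff ≈ {estimate_deff(t, mr, L):.2e} m^2/s")
```

On real drying data, MR would come from measured moisture contents rather than the closed-form curve used here to generate a self-consistent example.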
(This article belongs to the Special Issue Food Drying Applications for Plant Products: A Comparative Analysis)
Show Figures

Figure 1
<p><span class="html-italic">MR</span> curves of convective drying (<b>upper</b> figure) and microwave drying (<b>lower</b> figure). LN and TC stand for Loch Ness and Triple Crown varieties.</p>
Full article ">Figure 2
<p><span class="html-italic">DR</span> curves of convective drying (<b>upper</b> figure) and microwave drying (<b>lower</b> figure).</p>
Full article ">Figure 3
<p>PCA of independent variables and responses of the convective and microwave drying.</p>
Full article ">Figure 4
<p>A color-correlation diagram depicting the relationship between the independent variable parameters and the responses of the convective and microwave drying.</p>
Full article ">
16 pages, 4543 KiB  
Article
Convective Hot Air Drying of Red Cabbage (Brassica oleracea var. Capitata Rubra): Mathematical Modeling, Energy Consumption and Microstructure
by Antonio Vega-Galvez, Luis S. Gomez-Perez, Kong Shun Ah-Hen, Francisca Zepeda, Purificación García-Segovia, Cristina Bilbao-Sainz, Nicol Mejías and Alexis Pasten
Processes 2024, 12(3), 509; https://doi.org/10.3390/pr12030509 - 29 Feb 2024
Viewed by 1200
Abstract
This study examined the convective drying of red cabbage at temperatures ranging from 50 to 90 °C. Mathematical modeling was used to describe isotherms, drying kinetics and rehydration process. The effects of drying conditions on energy consumption and microstructure were also evaluated. The [...] Read more.
This study examined the convective drying of red cabbage at temperatures ranging from 50 to 90 °C. Mathematical modeling was used to describe the isotherms, drying kinetics and rehydration process. The effects of the drying conditions on energy consumption and microstructure were also evaluated. The Halsey model had the best fit to the isotherm data, and the equilibrium moisture content was determined to be 0.0672, 0.0490, 0.0379, 0.0324 and 0.0279 g water/g d.m. at 50, 60, 70, 80 and 90 °C, respectively. Drying kinetics were described most accurately by the Midilli and Kuçuk model, and the diffusion coefficient values increased with drying temperature. The lowest energy consumption was found for drying at 90 °C, and the rehydration process was best described by the Weibull model. Samples dehydrated at 90 °C showed a high water holding capacity and better preservation of the microstructure. These results could be used to foster a sustainable drying process for red cabbage. Full article
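The Midilli and Kuçuk thin-layer model referred to above has the form MR = a·exp(−k·tⁿ) + b·t. The sketch below evaluates it against the simpler one-parameter Lewis model, MR = exp(−k·t), on synthetic data; the parameter values (a, k, n, b) are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def midilli_kucuk(t, a, k, n, b):
    """Midilli and Kucuk thin-layer drying model: MR = a*exp(-k*t^n) + b*t."""
    return a * np.exp(-k * t**n) + b * t

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

t = np.linspace(0.01, 5.0, 40)                       # drying time, h (assumed)
mr_obs = midilli_kucuk(t, a=1.0, k=0.8, n=1.3, b=-0.0002)

# Fit the Lewis model by linearizing ln(MR) = -k*t
k_lewis = -np.polyfit(t, np.log(np.clip(mr_obs, 1e-9, None)), 1)[0]
mr_lewis = np.exp(-k_lewis * t)

print(rmse(midilli_kucuk(t, 1.0, 0.8, 1.3, -0.0002), mr_obs))  # 0: the generating model
print(rmse(mr_lewis, mr_obs))                                  # > 0: lack of fit
```

The extra shape parameter n is what lets the Midilli and Kuçuk model track the non-exponential curvature of real drying curves that a single-exponential model misses.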
Show Figures

Figure 1
<p>(<b>A</b>) Experimental isotherm curves at 50 and 70 °C with the Halsey model predictions. (<b>B</b>) Fit quality of desorption isotherm models.</p>
Full article ">Figure 2
<p>(<b>A</b>) Drying kinetics at five different process temperatures and the Midilli and Kucuk model fit. (<b>B</b>) Drying rate as a function of moisture content at different temperatures.</p>
Full article ">Figure 3
<p>Diffusion coefficients at different drying temperatures. On the bars, different letters (a, b, c, d and e) indicate significant differences as per Multiple Range Test (MRT) (<span class="html-italic">p</span> &lt; 0.05).</p>
Full article ">Figure 4
<p>Rehydration curves of red cabbage dehydrated at five different process temperatures.</p>
Full article ">Figure 5
<p>(<b>A</b>) Total water absorbed at equilibrium time; (<b>B</b>) Water holding capacity (%WHC). On the bars, different letters (a, b and c) indicate significant differences as per Multiple Range Test (MRT) (<span class="html-italic">p</span> &lt; 0.05).</p>
Full article ">Figure 6
<p>Microstructure of rehydrated red cabbage samples dried by convective drying at different process temperatures: (<b>a</b>) fresh; (<b>b</b>) 50 °C; (<b>c</b>) 60 °C; (<b>d</b>) 70 °C; (<b>e</b>) 80 °C and (<b>f</b>) 90 °C. ǀ―ǀ 20 µm.</p>
Full article ">
22 pages, 2194 KiB  
Review
A Contemporary Survey on Deepfake Detection: Datasets, Algorithms, and Challenges
by Liang Yu Gong and Xue Jun Li
Electronics 2024, 13(3), 585; https://doi.org/10.3390/electronics13030585 - 31 Jan 2024
Cited by 6 | Viewed by 10712
Abstract
Deepfakes are notorious for their unethical and malicious applications to achieve economic, political, and social reputation goals. Recent years have seen widespread facial forgery, which does not require technical skills. Since the development of generative adversarial networks (GANs) and diffusion models (DMs), deepfake [...] Read more.
Deepfakes are notorious for their unethical and malicious applications aimed at economic, political, and social reputation goals. Recent years have seen widespread facial forgery that requires no technical skill. Since the development of generative adversarial networks (GANs) and diffusion models (DMs), deepfake generation has been moving toward higher quality, making effective methods to detect fake media necessary. This contemporary survey provides a comprehensive overview of typical facial forgery detection methods proposed from 2019 to 2023. We analyze and group them into four categories in terms of their feature extraction methods and network architectures: traditional convolutional neural network (CNN)-based detection, CNN backbone with semi-supervised detection, transformer-based detection, and biological signal detection. Furthermore, we summarize several representative deepfake detection datasets with their advantages and disadvantages. Finally, we evaluate the performance of these detection models on different datasets by comparing their evaluation metrics. Across all experimental results on these state-of-the-art detection models, we find that accuracy degrades substantially under cross-dataset evaluation. These results provide a reference for further research toward more reliable detection algorithms. Full article
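The cross-dataset degradation reported in this survey can be illustrated with a toy score-based detector: the same detector that separates real from fake well on its training distribution loses discrimination when the fake-score distribution shifts. The sketch below computes ROC-AUC with the Mann-Whitney rank statistic; all score distributions are synthetic assumptions, not results from the surveyed models.

```python
import numpy as np

def auc(scores_real, scores_fake):
    """ROC-AUC of a score-based detector, i.e. P(fake score > real score)."""
    s = np.concatenate([scores_real, scores_fake])
    ranks = s.argsort().argsort() + 1          # 1-based ranks (continuous scores, no ties)
    n_r, n_f = len(scores_real), len(scores_fake)
    rank_sum_fake = ranks[n_r:].sum()
    return (rank_sum_fake - n_f * (n_f + 1) / 2) / (n_r * n_f)

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, 1000)
fake_in = rng.normal(2.5, 1.0, 1000)     # same dataset as training: well separated
fake_cross = rng.normal(0.8, 1.0, 1000)  # unseen dataset: distribution shift

print(f"in-dataset AUC:    {auc(real, fake_in):.3f}")
print(f"cross-dataset AUC: {auc(real, fake_cross):.3f}")
```

Shrinking the gap between the real and fake score distributions is exactly what happens when forgery artifacts learned on one dataset do not transfer to another.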
Show Figures

Figure 1
<p>Relative publication data obtained from Dimensions database [<a href="#B9-electronics-13-00585" class="html-bibr">9</a>] at the end of 2023 by searching “deepfake generation” and “deepfake detection” as keywords: (<b>a</b>) number of deepfake-generation-related scholarly papers from 2014 to 2023; (<b>b</b>) number of deepfake-detection-related papers from 2014 to 2023.</p>
Full article ">Figure 2
<p>Several FaceForensics++ samples. The manipulation methods are DeepFakes (Row 1), Face2Face (Row 2), FaceSwap (Row 3), and Neural Textures (Row 4). The DeepFakes and FaceSwap methods usually create low-quality manipulated facial sequences with color, landmark, and boundary mismatches. The Face2Face and Neural Textures methods can output slightly better-quality manipulated sequences, but with different resolutions.</p>
Full article ">Figure 3
<p>DFDC samples. We manually utilized the InsightFace facial detection model to extract human faces from the DFDC. Although some of the samples lack color blending and show obvious facial boundaries, the average quality is slightly higher than that of the first-generation deepfake datasets.</p>
Full article ">Figure 4
<p>Cropped manipulated facial frames from Celeb-DF V2. Except for the transgender and transracial fake samples (Row 3), it is hard to distinguish real from fake images with the human eye.</p>
Full article ">Figure 5
<p>The architecture of CapsuleNet face forensics detection. This method utilizes a CNN backbone to extract features and a Capsule Network to output vectors for prediction.</p>
Full article ">Figure 6
<p>Details of the primary Capsule Network structure with relative parameters. Each Capsule Network includes the parallel connection of 10 primary capsules.</p>
Full article ">Figure 7
<p>The architecture of CORE. The method extracts pairs of representations by data augmentation and calculates a consistency loss to guide the final loss function.</p>
Full article ">Figure 8
<p>The architecture of the consistency branch and classification branch [<a href="#B32-electronics-13-00585" class="html-bibr">32</a>]. The method calculates the patch similarity and the classification loss to guide the final loss.</p>
Full article ">Figure 9
<p>An overview of the DCL architecture. Reprinted with permission from Ref. [<a href="#B36-electronics-13-00585" class="html-bibr">36</a>], 2024, K. Sun. Four random data augmentation factors are utilized in this contrastive learning method to guide inter-class and intra-class distances separately.</p>
Full article ">Figure 10
<p>The structure of STIL [<a href="#B40-electronics-13-00585" class="html-bibr">40</a>]. Each STIL block contains SIM, TIM, and ISM modules.</p>
Full article ">Figure 11
<p>An overview of DFDT framework for deepfake detection [<a href="#B44-electronics-13-00585" class="html-bibr">44</a>]. It includes overlapping patch embedding, patch selection mechanism, multi-scale transformer block, and classifier.</p>
Full article ">Figure 12
<p>An architecture of ISTVT [<a href="#B46-electronics-13-00585" class="html-bibr">46</a>]. It consists of four basic components: backbone feature extraction, token embedding, self-attention blocks, and MLP.</p>
Full article ">Figure 13
<p>Two approaches combining visual representations with physiological signals. Reprinted with permission from Ref. [<a href="#B50-electronics-13-00585" class="html-bibr">50</a>], 2024, Stefanov, K.</p>
Full article ">Figure 14
<p>Some fake samples with obvious forgery clues are wrongly predicted as “Real”.</p>
Full article ">
23 pages, 4823 KiB  
Article
Multi-Scale Reconstruction of Turbulent Rotating Flows with Generative Diffusion Models
by Tianyi Li, Alessandra S. Lanotte, Michele Buzzicotti, Fabio Bonaccorso and Luca Biferale
Atmosphere 2024, 15(1), 60; https://doi.org/10.3390/atmos15010060 - 31 Dec 2023
Cited by 2 | Viewed by 1952
Abstract
We address the problem of data augmentation in a rotating turbulence set-up, a paradigmatic challenge in geophysical applications. The goal is to reconstruct information in two-dimensional (2D) cuts of the three-dimensional flow fields, imagining spatial gaps present within each 2D observed slice. We [...] Read more.
We address the problem of data augmentation in a rotating turbulence set-up, a paradigmatic challenge in geophysical applications. The goal is to reconstruct information in two-dimensional (2D) cuts of the three-dimensional flow fields, assuming spatial gaps are present within each observed 2D slice. We evaluate the effectiveness of different data-driven tools based on diffusion models (DMs), a state-of-the-art generative machine learning protocol, and on generative adversarial networks (GANs), previously considered the best-performing method in terms of both point-wise reconstruction and the statistical properties of the inferred velocity fields. We focus on two DMs recently proposed in the specialized literature: (i) RePaint, based on a heuristic strategy that guides an unconditional DM for flow generation using partial measurement data, and (ii) Palette, a conditional DM trained for the reconstruction task with paired measured and missing data. A systematic comparison shows that (i) DMs outperform the GAN in terms of the mean squared error and/or the statistical accuracy, and (ii) Palette emerges as the most promising tool in terms of both point-wise and statistical metrics. An important property of DMs is their capacity for probabilistic reconstruction: they provide a range of predictions based on the same measurements, enabling uncertainty quantification and risk assessment. Full article
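The two metrics used throughout this comparison are the point-wise MSE inside the gap and the Jensen-Shannon divergence between the distributions of the true and generated velocity magnitude. A minimal NumPy sketch of both, with assumed field shapes, bin counts, and a synthetic "reconstruction", is:

```python
import numpy as np

def mse(pred, truth, mask):
    """Mean squared error restricted to the masked (missing) region."""
    return float(np.mean((pred[mask] - truth[mask]) ** 2))

def js_divergence(x, y, bins=64):
    """Jensen-Shannon divergence (in bits) between histograms of x and y."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        nz = a > 0                      # m > 0 wherever a > 0, so this is safe
        return float(np.sum(a[nz] * np.log2(a[nz] / b[nz])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(2)
truth = rng.standard_normal((64, 64))                # stands in for a 64^2 slice
pred = truth + 0.1 * rng.standard_normal((64, 64))   # a good synthetic reconstruction
mask = np.zeros((64, 64), dtype=bool)
mask[12:52, 12:52] = True                            # a 40/64 square gap

print(mse(pred, truth, mask))
print(js_divergence(truth[mask], pred[mask]))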
Show Figures

Figure 1
<p>(<b>a</b>) Visualization of the velocity magnitude from a three-dimensional (3D) snapshot extracted from our numerical simulations. The two velocity planes (in the <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>-<math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> directions) at the top and bottom of the integration domain show the velocity magnitude. In the 3D volume, we visualize a rendering of the small-scale velocity filaments developed by the 3D dynamics. The gray square on the top level is an example of the damaged gap area, denoted as <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>G</mi> <mo>)</mo> </mrow> </semantics></math>, while the support where we assume to have the measurements is denoted as <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>S</mi> <mo>)</mo> </mrow> </semantics></math>, and their union defines the full 2D image, <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>I</mi> <mo>)</mo> <mo>=</mo> <mo>(</mo> <mi>S</mi> <mo>)</mo> <mo>∪</mo> <mo>(</mo> <mi>G</mi> <mo>)</mo> </mrow> </semantics></math>. A velocity contour around the most intense regions (<math display="inline"><semantics> <mrow> <mo>∥</mo> <mi mathvariant="bold-italic">u</mi> <mo>∥</mo> <mo>&gt;</mo> <mn>6.35</mn> </mrow> </semantics></math>) highlights the presence of the quasi-2D columnar structures (almost constant along <math display="inline"><semantics> <msub> <mi>x</mi> <mn>3</mn> </msub> </semantics></math>-axis), due to the effect of the Coriolis force induced by the frame rotation. (<b>b</b>) Energy spectra averaged over time. The range of scales where forcing is active is indicated by the gray band. The dashed vertical line denotes the Kolmogorov dissipative wavenumber. 
The reconstruction of the gappy area is based on a downsized image on a grid of <math display="inline"><semantics> <msup> <mn>64</mn> <mn>2</mn> </msup> </semantics></math> collocation points, which corresponds to a resolution of the order of <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <msub> <mi>k</mi> <mi>η</mi> </msub> </mrow> </semantics></math>. (<b>c</b>) Sketch illustration of the reconstruction protocol of a diffusion model (DM) in the backward phase (see later), which uses a Markov chain to progressively generate information through a neural network.</p>
Full article ">Figure 2
<p>Diagram of the forward process in the DM framework. Starting with the original field <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="script">V</mi> <mi>I</mi> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> </msubsup> <mo>=</mo> <msub> <mi mathvariant="script">V</mi> <mi>I</mi> </msub> </mrow> </semantics></math>, Gaussian noise is incrementally added over <span class="html-italic">N</span> diffusion steps, transforming the original <math display="inline"><semantics> <msup> <mn>64</mn> <mn>2</mn> </msup> </semantics></math> image into white noise on the same resolution grid, <math display="inline"><semantics> <msubsup> <mi mathvariant="script">V</mi> <mi>I</mi> <mrow> <mo>(</mo> <mi>N</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math>.</p>
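The forward process in this diagram has a well-known closed form: with a variance schedule β₁…β_N and ᾱ_n = Π(1−β_i), any step can be reached directly as x_n = √ᾱ_n·x₀ + √(1−ᾱ_n)·ε. The sketch below uses a standard DDPM-style linear schedule as an assumption, not the paper's exact settings.

```python
import numpy as np

def forward_diffuse(x0, n, alpha_bar, rng):
    """Jump directly to diffusion step n via the closed-form forward process."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[n]) * x0 + np.sqrt(1.0 - alpha_bar[n]) * eps

N = 1000
betas = np.linspace(1e-4, 0.02, N)        # linear variance schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(3)
x0 = rng.standard_normal((64, 64))        # stands in for a 64^2 velocity slice
x_mid = forward_diffuse(x0, 100, alpha_bar, rng)
x_end = forward_diffuse(x0, N - 1, alpha_bar, rng)

# By the last step almost no signal survives: corr(x_end, x0) ~ 0
print(np.corrcoef(x0.ravel(), x_end.ravel())[0, 1])
```

The backward process learned by the U-Net inverts exactly this chain, one denoising step at a time.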
Full article ">Figure 3
<p>Schematic representation of the DM flow field generation framework used by RePaint for flow reconstruction. (<b>a</b>,<b>b</b>) Training stage: (<b>a</b>) the neural network architecture, U-Net [<a href="#B62-atmosphere-15-00060" class="html-bibr">62</a>], that takes a noisy flow field as input at step <span class="html-italic">n</span> and predicts a denoised field at step <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>−</mo> <mn>1</mn> </mrow> </semantics></math>; (<b>b</b>) The scheme of the forward and backward diffusion Markov processes. The forward process (from right to left) incrementally adds noise over <span class="html-italic">N</span> steps, while the backward process (from left to right), modeled by the U-Net, iteratively reconstructs the flow field by denoising the noisy data. More details on the network architecture can be found in <a href="#app2-atmosphere-15-00060" class="html-app">Appendix B</a>. (<b>c</b>,<b>d</b>) Reconstruction stage starting from a <span class="html-italic">damaged</span> field with a square mask. 
(<b>c</b>) Conditioning the backward process with the measurement, <math display="inline"><semantics> <msub> <mi mathvariant="script">V</mi> <mi>S</mi> </msub> </semantics></math>, involves projecting the noisy state of the entire 2D field, <math display="inline"><semantics> <msubsup> <mi mathvariant="script">V</mi> <mi>I</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math>, onto the gap region, <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="script">V</mi> <mi>I</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> <msub> <mrow> <mo stretchy="false">|</mo> </mrow> <mi>G</mi> </msub> </mrow> </semantics></math>, and combining it with the noisy measurement, <math display="inline"><semantics> <msubsup> <mi mathvariant="script">V</mi> <mi>S</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math>, obtained from the forward process up to the corresponding step. In this way, we obtain <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi mathvariant="script">V</mi> <mo stretchy="false">˜</mo> </mover> <mi>I</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> and then enforce the conditioning by inputting it into a backward step. (<b>d</b>) A resampling approach is used to deal with the incoherence in <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi mathvariant="script">V</mi> <mo stretchy="false">˜</mo> </mover> <mi>I</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> introduced by the ‘rigid’ concatenation. First, we perform a backward step to obtain <math display="inline"><semantics> <msubsup> <mi mathvariant="script">V</mi> <mi>I</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>−</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> </semantics></math>, and then some noise is added by <span class="html-italic">j</span> forward steps (blue arrow). 
Finally, the field is resampled backwards by the same number of iterations, going back to the original step.</p>
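The conditioning step described in panel (c) can be sketched as a masked stitch: at step n, the support S is replaced by a forward-noised copy of the measurement while the gap G keeps the sample produced by the denoiser. Only the masking logic below reflects RePaint; the generated field is a random stand-in rather than a real U-Net output.

```python
import numpy as np

def repaint_condition(x_gen, x0_measured, gap_mask, alpha_bar_n, rng):
    """One RePaint-style conditioning step: stitch noised measurement and sample."""
    # Forward-noise the clean measurement to the current step n ...
    eps = rng.standard_normal(x0_measured.shape)
    x_meas_n = np.sqrt(alpha_bar_n) * x0_measured + np.sqrt(1 - alpha_bar_n) * eps
    # ... and stitch: measurement on the support S, generated content in the gap G
    return np.where(gap_mask, x_gen, x_meas_n)

rng = np.random.default_rng(4)
x0 = rng.standard_normal((64, 64))          # stands in for the measured field
gap = np.zeros((64, 64), dtype=bool)
gap[12:52, 12:52] = True                    # a 40/64 square gap

x_gen = 0.9 * rng.standard_normal((64, 64))  # stand-in for the denoiser's sample
x_tilde = repaint_condition(x_gen, x0, gap, alpha_bar_n=0.9, rng=rng)

print(np.array_equal(x_tilde[gap], x_gen[gap]))  # True: the gap keeps the sample
```

The resampling loop in panel (d) exists precisely because this hard stitch makes the two regions locally incoherent; re-noising and re-denoising lets the network blend them.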
Full article ">Figure 4
<p>Schematic of the DM Palette protocol. (<b>a</b>) In the backward process (from left to right), we start from pure noise in the gap, <math display="inline"><semantics> <msubsup> <mi mathvariant="script">V</mi> <mi>G</mi> <mrow> <mo>(</mo> <mi>N</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math>, combined with the measurements in the frame, <math display="inline"><semantics> <msub> <mi mathvariant="script">V</mi> <mi>S</mi> </msub> </semantics></math>, to progressively denoise the missing information using the U-Net architecture described in (<b>b</b>). (<b>b</b>) A sketch of the U-Net integrating the measurement, <math display="inline"><semantics> <msub> <mi mathvariant="script">V</mi> <mi>S</mi> </msub> </semantics></math>, and the noisy data within the gap, <math display="inline"><semantics> <msubsup> <mi mathvariant="script">V</mi> <mi>G</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math>, for a backward step.</p>
Full article ">Figure 5
<p>(<b>a</b>) The mean squared error (MSE) between the true and the generated velocity magnitude, as obtained from GAN, RePaint, and Palette, for a square gap with variable size. Error bars indicate the standard deviation. The red horizontal line represents the uncorrelated baseline MSE, <math display="inline"><semantics> <mrow> <mo>≈</mo> <mn>0.54</mn> </mrow> </semantics></math>. (<b>b</b>) The Jensen–Shannon (JS) divergence between the probability density functions (PDFs) for the true and generated velocity magnitude. The mean and error bars represent the average and range of variation of the JS divergence across 10 batches, each with 2048 samples.</p>
Full article ">Figure 6
<p>PDFs of the velocity magnitude in the missing region obtained from (<b>a</b>) GAN, (<b>b</b>) RePaint, and (<b>c</b>) Palette for a square gap of variable size <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>24</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (triangle), <math display="inline"><semantics> <mrow> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (cross), and <math display="inline"><semantics> <mrow> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (diamond). The PDF of the true data over the whole region is plotted for reference (solid black line) and <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>(</mo> <mi>u</mi> <mo>)</mo> </mrow> </semantics></math> is the standard deviation of the original data over the full domain.</p>
Full article ">Figure 7
<p>The PDFs of the spatially averaged <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error for a single flow configuration obtained from GAN, RePaint, and Palette models. The gap size changes from (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>24</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math>, to (<b>b</b>) <math display="inline"><semantics> <mrow> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> and (<b>c</b>) <math display="inline"><semantics> <mrow> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Examples of reconstruction of an instantaneous field (velocity magnitude) for a square gap of size (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>24</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> and (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math>. The damaged fields are shown in the first column, while the second to fourth columns, circled by a red rectangle, show the reconstructed fields obtained from GAN, RePaint, and Palette. The ground truth is shown in the fifth column.</p>
Full article ">Figure 9
<p>(<b>a</b>) MSE and (<b>b</b>) JS divergence between the PDFs for the gradient of the original and generated velocity magnitude, as obtained from GAN, RePaint, and Palette, for a square gap with variable size. The red horizontal line in (<b>a</b>) represents the uncorrelated baseline, equal to 2. Error bars are obtained in the same way as in <a href="#atmosphere-15-00060-f005" class="html-fig">Figure 5</a>.</p>
Full article ">Figure 10
<p>The PDFs of the gradient of the reconstructed velocity magnitude in the missing region obtained from (<b>a</b>) GAN, (<b>b</b>) RePaint, and (<b>c</b>) Palette, for a square gap of variable size <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>24</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (triangle), <math display="inline"><semantics> <mrow> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (cross), and <math display="inline"><semantics> <mrow> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (diamond). The PDF of the true data over the whole region is plotted for reference (solid black line) and <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>(</mo> <mo>∂</mo> <mi>u</mi> <mo>/</mo> <mo>∂</mo> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>)</mo> </mrow> </semantics></math> is the standard deviation of the original data over the full domain.</p>
Full article ">Figure 11
<p>The gradient of the velocity magnitude fields shown in <a href="#atmosphere-15-00060-f008" class="html-fig">Figure 8</a>. The first column shows the damaged fields with a square gap of size (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>24</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> and (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math>. Note that for the case <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math>, the gap extends almost to the borders, leaving only a single vertical velocity line on both the left and right sides, where the original gradient field is missing. The gradient of the reconstructions from GAN, RePaint, and Palette, shown in the second to fourth columns, is surrounded by a red rectangle for emphasis, while the fifth column shows the ground truth.</p>
Full article ">Figure 12
<p>Energy spectra of the original velocity magnitude (solid black line) and the reconstructions obtained from (<b>a</b>) GAN, (<b>b</b>) RePaint, and (<b>c</b>) Palette for a square gap of sizes <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>24</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (triangle), <math display="inline"><semantics> <mrow> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (cross), and <math display="inline"><semantics> <mrow> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (diamond). The corresponding <math display="inline"><semantics> <mrow> <mi>E</mi> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>/</mo> <msup> <mi>E</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> is shown in (<b>d</b>–<b>f</b>), where <math display="inline"><semantics> <mrow> <mi>E</mi> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>E</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> are the spectra of the reconstructed fields and the ground truth, respectively.</p>
Full article ">Figure 13
<p>The flatness of the original field (solid black line) and the reconstructions obtained from (<b>a</b>) GAN, (<b>b</b>) RePaint, and (<b>c</b>) Palette for a square gap of sizes <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>24</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (triangle), <math display="inline"><semantics> <mrow> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (cross), and <math display="inline"><semantics> <mrow> <mn>62</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> (diamond).</p>
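Flatness curves of this kind are the normalized fourth moment of velocity increments; a Gaussian field gives a flatness of 3, and growth above 3 at small separations signals intermittency. A minimal sketch for a 1D periodic signal (illustrative, not the paper's code):

```python
import numpy as np

def flatness(u, r):
    """Flatness F(r) = <(du_r)^4> / <(du_r)^2>^2 for a periodic 1D signal.

    du_r is the increment u(x + r) - u(x); F = 3 for Gaussian statistics.
    """
    du = np.roll(u, -r) - u      # periodic increment at separation r
    return np.mean(du ** 4) / np.mean(du ** 2) ** 2
```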
Full article ">Figure 14
<p>Probabilistic reconstructions from DMs for a fixed measurement outside a square gap with size <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>/</mo> <msub> <mi>l</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>40</mn> <mo>/</mo> <mn>64</mn> </mrow> </semantics></math> for a configuration where all models give quite small reconstruction errors. (<b>a</b>) PDFs of the spatially averaged <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error over different reconstructions obtained from RePaint and Palette. The blue vertical dashed line indicates the error for the GAN case. (<b>b</b>) The damaged measurement and ground truth, inside a red box, and the prediction from GAN. (<b>c</b>) The reconstructions from RePaint with a small <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error (S), the mean <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error (M), and with a large <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error (L). (<b>d</b>) The reconstructions from Palette corresponding to a small <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error (S), the mean <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error (M), and a large <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> error (L).</p>
Full article ">Figure 15
<p>Similar to <a href="#atmosphere-15-00060-f014" class="html-fig">Figure 14</a>, but for a flow configuration chosen for its large reconstruction errors from GAN, RePaint, and Palette. The red box contains the damaged measurement and ground truth.</p>
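The spatially averaged L<sub>2</sub> error whose PDFs appear in panel (a) of Figures 14 and 15 can be evaluated over the damaged region alone. A minimal sketch, where the mask and the relative normalization are assumed conventions rather than the paper's exact definition:

```python
import numpy as np

def gap_l2_error(pred, truth, mask):
    """Spatially averaged L2 error of a reconstruction inside the gap.

    `mask` is True on the damaged pixels; the RMS prediction error there
    is normalized by the RMS of the ground truth on the same pixels.
    """
    diff = (pred - truth)[mask]
    return np.sqrt(np.mean(diff ** 2) / np.mean(truth[mask] ** 2))
```

Sampling many diffusion-model reconstructions for one fixed measurement and histogramming this scalar yields error PDFs like those shown in panel (a).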
11 pages, 5180 KiB  
Article
Adsorption Mechanism of Methylene Blue on Purified Red Phosphorus and Effects of Different Temperatures on Methylene Blue Desorption
by Tiantian Chen, Jiayu Sun, Ruixue Jiang, Tongfei Zhang, Yulei Zhang and Xiaochen Li
Water 2024, 16(1), 167; https://doi.org/10.3390/w16010167 - 31 Dec 2023
Cited by 2 | Viewed by 1691
Abstract
Purified red phosphorus (RP) can be used as an adsorbent. However, the adsorption mechanism and reuse ability of purified RP have not been reported. This study utilized X-ray diffraction, Fourier transform infrared spectroscopy and scanning electron microscopy techniques (a statistical physics model and [...] Read more.
Purified red phosphorus (RP) can be used as an adsorbent. However, the adsorption mechanism and reusability of purified RP have not been reported. This study used X-ray diffraction, Fourier transform infrared spectroscopy and scanning electron microscopy, together with a statistical physics model and the standard molar free energy of formation, to investigate the adsorption mechanism of methylene blue (MB) by purified RP. According to X-ray diffraction, purification did not change the structure of commercial RP. Fourier transform infrared spectroscopy and UV–vis diffuse reflection absorption spectra showed that the adsorption process involved only physical adsorption. The specific areas of commercial RP and purified RP were 0.02 cm<sup>3</sup>/g and 5.27 cm<sup>3</sup>/g, respectively; thus, purified RP has a higher adsorption capacity than commercial RP. A statistical physics model showed that, as the temperature increased from 288 to 308 K, the <span class="html-italic">q</span><sub>e</sub>, <span class="html-italic">D</span><sub>m</sub> and <span class="html-italic">q</span><sub>sat</sub> of purified RP for MB increased from 179.87 mg/g, 0.824 mol/kg and 0.824 mol/kg to 303.26 mg/g, 1.497 mol/kg and 1.497 mol/kg, respectively. The fitted values of Δ<sub>r</sub><span class="html-italic">S</span><sub>m</sub><sup>θ</sup>, Δ<sub>r</sub><span class="html-italic">H</span><sub>m</sub><sup>θ</sup> and Δ<sub>r</sub><span class="html-italic">G</span><sub>m</sub><sup>θ</sup> were 104.38 J·mol<sup>−1</sup>·K<sup>−1</sup>, −2.7 × 10<sup>3</sup> J·mol<sup>−1</sup> and negative, respectively. Thus, according to the adsorption energy, the adsorption of MB by purified RP was a spontaneous process driven mainly by the entropy increase. Purified RP adsorbed the cationic and anionic dyes more strongly than the neutral dye. As the purified RP dose increased from 30 to 150 mg, the adsorption capacity of purified RP increased; however, as the MB concentration and pH increased, the adsorption capacity decreased. Purified RP showed excellent reusability, and high-temperature desorption can be applied to restore it. Full article
(This article belongs to the Section Water Quality and Contamination)
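The spontaneity claim in the abstract follows directly from the fitted values: with ΔH = −2.7 kJ/mol and ΔS = 104.38 J/(mol·K), the Gibbs energy change ΔG = ΔH − TΔS is negative throughout 288–308 K. A quick check using only the values quoted above:

```python
def gibbs_energy(dH, dS, T):
    """Standard molar Gibbs energy change dG = dH - T*dS (all in J/mol)."""
    return dH - T * dS

# Fitted values quoted in the abstract above
dH = -2.7e3      # J/mol (exothermic, but small)
dS = 104.38      # J/(mol*K)
for T in (288.0, 298.0, 308.0):
    dG = gibbs_energy(dH, dS, T)
    print(f"T = {T:.0f} K: dG = {dG / 1e3:.1f} kJ/mol")   # negative at every T
```

Note that |TΔS| ≈ 30 kJ/mol dominates |ΔH| ≈ 2.7 kJ/mol, which is exactly the abstract's point that the process is mainly entropy-driven.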
Figure 1
<p>(<b>a</b>) XRD patterns of commercial RP and purified RP before and after adsorption of MB and (<b>b</b>) FTIR spectra of purified RP before and after adsorption of MB. Conditions: the concentration of MB = 90 mg/L, the volume of MB solution = 100 mL, the weight of purified RP = 40 mg, the adsorption time = 48 h, T = 298 K, pH = 7.</p>
Full article ">Figure 2
<p>SEM images of commercial RP (<b>a</b>,<b>b</b>) and purified RP (<b>c</b>,<b>d</b>).</p>
Full article ">Figure 3
<p>Effects of different temperatures on <span class="html-italic">D</span><sub>m</sub> (<b>a</b>), <span class="html-italic">q</span><sub>sat</sub> (<b>b</b>) and Δ<sub>r</sub><span class="html-italic">G</span><sub>m</sub><sup>θ</sup> (<b>c</b>) of purified RP for MB adsorption. Conditions: T = 288, 298 and 308 K, the concentration of MB = 40–160 mg/L, the volume of MB solution = 100 mL, the weight of purified RP = 40 mg, pH = 7.</p>
Full article ">Figure 4
<p>Effects of different dyes (MB, CR and anionic dyes were selected) on removal. (<b>a</b>) Conditions: the concentration of dye = 90 mg/L, the volume of dye solution = 100 mL, the weight of purified RP = 40 mg, T = 298 K, pH = 7. Effect of purified RP dose on removal. (<b>b</b>) Conditions: the weight of purified RP = 20, 30, 40, 50 and 60 mg, the concentration of MB = 90 mg/L, the volume of MB solution = 100 mL, T = 298 K, pH = 7. Effect of MB concentration on removal. (<b>c</b>) Conditions: the concentration of MB = 30, 60, 90, 120 and 150 mg/L, the volume of MB solution = 100 mL, the weight of purified RP = 40 mg, T = 298 K, pH = 7. Effect of pH on removal. (<b>d</b>) Conditions: pH = 3, 5, 7, 9 and 11, the concentration of MB = 90 mg/L, the volume of MB solution = 100 mL, the weight of purified RP = 40 mg, T = 298 K.</p>
Full article ">Figure 5
<p>Effects of different temperatures on MB desorption. (<b>a</b>) Conditions: T = 278, 298, 318 and 338 K, the concentration of MB = 90 mg/L, the volume of MB solution = 100 mL, the weight of purified RP = 40 mg, pH = 7. Reuse cycles of purified RP for the adsorption of MB. (<b>b</b>) Conditions: the concentration of MB = 90 mg/L, the volume of MB solution = 100 mL, the weight of purified RP = 40 mg, T = 308 K, pH = 7.</p>
20 pages, 7601 KiB  
Article
Sources of Metallogenic Materials of the Saima Alkaline Rock-Hosted Niobium–Tantalum Deposit in the Liaoning Region: Evidence from the Sr-Nd-Pb and Li Isotopes
by Yue Wu, Nan Ju, Xin Liu, Lu Shi, Yuhui Feng and Danzhen Ma
Minerals 2023, 13(11), 1443; https://doi.org/10.3390/min13111443 - 15 Nov 2023
Cited by 1 | Viewed by 1248
Abstract
The Saima alkaline rock-hosted niobium–tantalum deposit (hereafter referred to as the Saima Deposit) is situated in the Liaodong Peninsula, which constitutes the eastern segment of the northern margin of the North China Craton. The rock types of the Saima Deposit include phonolite, nepheline [...] Read more.
The Saima alkaline rock-hosted niobium–tantalum deposit (hereafter referred to as the Saima Deposit) is situated in the Liaodong Peninsula, which constitutes the eastern segment of the northern margin of the North China Craton. The rock types of the Saima Deposit include phonolite, nepheline syenite, and aegirine nepheline syenite, which hosts niobium–tantalum ore bodies. In this study, the primary niobium-bearing minerals identified include loparite, betafite, and fersmite. The Saima pluton is characterized as a potassium-rich, low-sodium, and peraluminous alkaline pluton. Trace element characteristics reveal that the metallization-associated syenite is enriched in large-ion lithophile elements (LILEs) such as K and Rb but is relatively depleted in high-field strength elements (HFSEs). As indicated by the rare earth element (REE) profile, the Saima pluton exhibits a high total REE content (∑REE), dominance of light REEs (LREEs), and scarcity of heavy REEs (HREEs). The Sr-Nd-Pb isotopic data suggest that aegirine nepheline syenite and nepheline syenite share consistent isotopic signatures, indicating a common origin. The Saima alkaline pluton displays elevated I<sub>Sr</sub> values ranging from 0.70712 to 0.70832 coupled with low ε<sub>Nd</sub>(t) values between −12.84 and −11.86 and two-stage model ages (<span class="html-italic">t</span><sub>DM2</sub>) from 1967 to 2047 Ma. These findings indicate that the metallogenic materials for the Saima Deposit derive from both an enriched mantle source and some crustal components. The lithium (Li) isotopic fractionation observed during the genesis of the Saima pluton could be attributed to the differential diffusion rates of <sup>6</sup>Li and <sup>7</sup>Li under non-equilibrium fluid–rock interactions. Full article
Figure 1
<p>Geotectonic location map. (<b>A</b>) Geological sketch map (modified from reference [<a href="#B9-minerals-13-01443" class="html-bibr">9</a>]). (<b>B</b>) Distribution map of rare earth deposits in northeast China (modified from reference [<a href="#B9-minerals-13-01443" class="html-bibr">9</a>]). (<b>C</b>) Geological map of the Saima deposit (modified from reference [<a href="#B5-minerals-13-01443" class="html-bibr">5</a>]). 1. Quaternary alluvium; 2. Jurassic Beimiao formation; 3. Huaziyu formation of the Liaohe group; 4. Late Triassic Saima diamictite; 5. Late Triassic nepheline syenite; 6. Late Triassic biotite–nepheline syenite; 7. Angular unconformity; 8. Parallel displacement fault; 9. Nb ore body and number; 10. Jurassic Zhuanshanzi formation; 11. Wangjiagou rock body of the Liaohe group; 12. Late Triassic brown ijolite syenite; 13. Late Triassic aegirine syenite; 14. Late Triassic grass-green aegirine ijolite syenite; 15. Late Triassic intrusive rock (nepheline phonolite); 16. Inferred fault; 17. Sample location.</p>
Full article ">Figure 2
<p>Field and photomicrographs of the Saima deposit. (<b>a</b>) Coarse- to medium-grained biotite nepheline syenite. (<b>b</b>) Aegirine nepheline syenite. (<b>c</b>) Aegirine nepheline syenite. (<b>d</b>) Biotite nepheline syenite.</p>
Full article ">Figure 3
<p>Backscatter images of niobium-bearing minerals. (<b>a</b>,<b>b</b>) Fersmite; (<b>c</b>,<b>d</b>) Betafite; (<b>e</b>,<b>f</b>) Loparite.</p>
Full article ">Figure 4
<p>Geochemical diagrams showing the major elements of aegirine nepheline syenite and nepheline syenite in the Saima Deposit. (<b>a</b>) TAS diagram (after [<a href="#B29-minerals-13-01443" class="html-bibr">29</a>]). (<b>b</b>) A/NK-A/CNK diagram (after [<a href="#B30-minerals-13-01443" class="html-bibr">30</a>]). (<b>c</b>) K<sub>2</sub>O-SiO<sub>2</sub> diagram (after [<a href="#B31-minerals-13-01443" class="html-bibr">31</a>]). (<b>d</b>) FeOt/(FeOt + MgO)-SiO<sub>2</sub> diagram of ore-forming plutons of the Saima and Baerzhe deposits (after [<a href="#B32-minerals-13-01443" class="html-bibr">32</a>]).</p>
Full article ">Figure 5
<p>Primitive mantle-normalized trace element spider diagrams (<b>a</b>) and chondrite-normalized REE patterns (<b>b</b>) of the aegirine nepheline syenite and nepheline syenite in the Saima deposit [<a href="#B38-minerals-13-01443" class="html-bibr">38</a>]. The grey field is the data from Ju [<a href="#B9-minerals-13-01443" class="html-bibr">9</a>].</p>
Full article ">Figure 6
<p>Magmatic source area diagrams of ore-forming rock plutons in the Saima Deposit ((<b>a</b>) modified from [<a href="#B49-minerals-13-01443" class="html-bibr">49</a>]; (<b>b</b>,<b>c</b>) modified from [<a href="#B50-minerals-13-01443" class="html-bibr">50</a>]) (EMI, EMII, HIMU and Primitive Mantle after [<a href="#B51-minerals-13-01443" class="html-bibr">51</a>]; lower crust, mature arc and upper crust after Zartman et al. [<a href="#B50-minerals-13-01443" class="html-bibr">50</a>]).</p>
Full article ">Figure 7
<p>Plots of Li isotope fractionation: Nb/Ta-Zr/Hf (<b>a</b>), whole-rock δ<sup>7</sup>Li versus Li content (<b>b</b>), whole-rock δ<sup>7</sup>Li-Nb/Ta (<b>c</b>), and whole-rock δ<sup>7</sup>Li-Zr/Hf plots (<b>d</b>).</p>
22 pages, 3798 KiB  
Article
Assessment of Nitrate in Groundwater from Diffuse Sources Considering Spatiotemporal Patterns of Hydrological Systems Using a Coupled SWAT/MODFLOW/MT3DMS Model
by Alejandra Correa-González, Joel Hernández-Bedolla, Marco Antonio Martínez-Cinco, Sonia Tatiana Sánchez-Quispe and Mario Alberto Hernández-Hernández
Hydrology 2023, 10(11), 209; https://doi.org/10.3390/hydrology10110209 - 9 Nov 2023
Cited by 4 | Viewed by 2503
Abstract
In recent years, due to various anthropogenic activities, such as agriculture and livestock, the presence of nitrogen-associated contaminants has been increasing in surface- and groundwater resources. Among these, the main compounds present in groundwater are ammonia, nitrite, and nitrate. However, it is sometimes [...] Read more.
In recent years, due to various anthropogenic activities, such as agriculture and livestock, the presence of nitrogen-associated contaminants has been increasing in surface- and groundwater resources. Among these, the main compounds present in groundwater are ammonia, nitrite, and nitrate. However, it is sometimes difficult to assess such effects given the scarcity or lack of information and the complexity of the system. In the current study, a methodology is proposed to assess nitrate in groundwater from diffuse sources, considering spatiotemporal patterns of hydrological systems, using a coupled SWAT/MODFLOW/MT3DMS model. The application of the model is carried out using a simplified simulation scheme of the hydrological and agricultural systems because of the limited spatial and temporal data. The study area comprises the Cuitzeo Lake basin for surface flow and the Morelia–Querendaro aquifer for groundwater flow. The methodology yields surface runoff, groundwater levels, and nitrate concentrations in the surface- and groundwater systems. The results indicate that the simulated nitrate concentrations match the historical records within acceptable values of the statistical parameters and are therefore considered adequate. Full article
(This article belongs to the Special Issue Groundwater Pollution: Sources, Mechanisms, and Prevention)
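The sequential coupling described in the abstract (the surface model supplies recharge and nitrate load, the flow model updates heads, and the transport model updates concentrations each stress period) can be illustrated with a deliberately tiny single-cell water and mass balance. Every name and number here is illustrative, not the study's configuration:

```python
def couple(recharges, loads, sy=0.1, area=1.0e6):
    """Toy one-cell stand-in for the SWAT -> MODFLOW -> MT3DMS chain.

    Per stress period the surface model supplies recharge R (m) and nitrate
    load L (kg); the flow step raises the head by R/Sy and the transport
    step mixes L into the stored volume. Illustrative sketch only.
    """
    head, mass, series = 0.0, 0.0, []
    for R, L in zip(recharges, loads):
        head += R / sy                               # flow step (MODFLOW role)
        mass += L                                    # transport step (MT3DMS role)
        volume = sy * head * area                    # stored volume above datum (m^3)
        conc = 1e3 * mass / volume if volume else 0.0  # kg/m^3 -> mg/L
        series.append((head, conc))
    return series
```

In the actual coupled scheme each of these scalars is a distributed field exchanged between the models at every stress period, but the bookkeeping per cell has this shape.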
Figure 1
<p>Agricultural land distribution in the basin.</p>
Full article ">Figure 2
<p>Main crops as a percentage of total agriculture in the LCB.</p>
Full article ">Figure 3
<p>Geology of the Morelia–Querendaro aquifer, obtained from National Institute of Statistics and Geography (INEGI; <a href="https://www.inegi.org.mx/temas/" target="_blank">https://www.inegi.org.mx/temas/</a>; accessed on 28 October 2023).</p>
Full article ">Figure 4
<p>Coupling of mathematical models. Quantitative and qualitative inputs and outputs of each model.</p>
Full article ">Figure 5
<p>Monthly average recharge of the Morelia–Querendaro aquifer; 1 hm<sup>3</sup> = 1,000,000 m<sup>3</sup>.</p>
Full article ">Figure 6
<p>Annual and monthly average nitrate load entering the Morelia–Querendaro aquifer. Amounts are shown in 10<sup>3</sup> tons of nitrogen.</p>
Full article ">Figure 7
<p>Quantitative and qualitative inputs and outputs of each model.</p>
Full article ">Figure 8
<p>Historical and modeled flow series, calibrated. (<b>A</b>) Point S1, period 1960–1989. (<b>B</b>) Point S2, period 1960–2002. (<b>C</b>) Point S3. Validation, period 2000–2002. 1 hm<sup>3</sup> = 1,000,000 m<sup>3</sup>.</p>
Full article ">Figure 9
<p>Point S3, calibrated. Historical and modeled nitrate concentrations, period 2008–2009.</p>
Full article ">Figure 10
<p>Historical and simulated groundwater level. (<b>A</b>) Point G4, period 1970–2010. (<b>B</b>) Point G5, period 1970–2010.</p>
Full article ">Figure 11
<p>Nitrate concentrations in groundwater. (<b>A</b>) Point NG1. (<b>B</b>) Point NG3. (<b>C</b>) Point NG4. (<b>D</b>) Point NG5. Period simulated: 1970–2010.</p>