Search Results (7,647)

Search Parameters:
Keywords = annotation

10 pages, 1746 KiB  
Technical Note
MOTH: Memory-Efficient On-the-Fly Tiling of Histological Image Annotations Using QuPath
by Thomas Kauer, Jannik Sehring, Kai Schmid, Marek Bartkuhn, Benedikt Wiebach, Slaven Crnkovic, Grazyna Kwapiszewska, Till Acker and Daniel Amsel
J. Imaging 2024, 10(11), 292; https://doi.org/10.3390/jimaging10110292 - 15 Nov 2024
Abstract
The growing use of digitized histopathological images opens new possibilities for data analysis. With the help of artificial intelligence algorithms, it is now possible to detect certain structures and morphological features on whole slide images automatically. This enables algorithms to count, measure, or evaluate those areas when trained properly. To achieve suitable training, datasets must be annotated and curated by users in programs like QuPath. Extracting these data for artificial intelligence algorithms is still rather tedious, and the extracted tiles must be saved to a local hard drive. We developed a toolkit for integration into existing pipelines and tools, such as U-Net, for the on-the-fly extraction of annotation tiles from existing QuPath projects. The tiles can be used directly as input for artificial intelligence algorithms, and the results are transferred directly back to QuPath for visual inspection. With the toolkit, we created a convenient way to incorporate QuPath into existing AI workflows. Full article
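MOTH's own interface is not reproduced on this page; purely as an illustrative sketch of the on-the-fly tiling idea the abstract describes, the snippet below streams tiles from an annotated whole-slide region with the openslide library instead of exporting them to disk first. The file name, bounding box, and tile size are hypothetical.

```python
# Minimal sketch of on-the-fly tiling of an annotated whole-slide region.
# This is NOT the MOTH API (not shown on this page); it only illustrates streaming
# tiles directly to a model instead of exporting them to disk first.
import numpy as np
import openslide  # library for reading whole-slide images


def iter_annotation_tiles(slide_path, bbox, tile_size=512, level=0):
    """Yield RGB tiles covering the annotation bounding box (x0, y0, x1, y1)."""
    slide = openslide.OpenSlide(slide_path)
    x0, y0, x1, y1 = bbox
    for y in range(y0, y1, tile_size):
        for x in range(x0, x1, tile_size):
            region = slide.read_region((x, y), level, (tile_size, tile_size))
            yield (x, y), np.asarray(region.convert("RGB"))


# Usage idea: feed tiles straight into a segmentation model (e.g., a U-Net) and keep
# predictions in memory rather than writing intermediate files.
# for (x, y), tile in iter_annotation_tiles("slide.svs", (0, 0, 4096, 4096)):
#     mask = model.predict(tile[None, ...])   # hypothetical model call
```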
Figures:
Figure 1. MOTH overview. MOTH is a suite of tools that facilitates the import and export of annotations and images from and into QuPath. The system is capable of establishing a connection to local AI-based algorithms.
Figure 2. (A,B) IoU and HD of exported shapes rendered with MOTH and Groovy in the artificial dataset. (C,D) IoU and HD of exported shapes rendered with MOTH and Groovy in the mitosis dataset. Groovy results are marked in orange and MOTH results are marked in green. Diamonds represent outliers.
Figure 3. MOTH export of small shapes with pixel offsets. The figure shows the export of small ground truth shapes. The ground truth shapes are drawn as orange lines and the center of the shape is marked by an orange dot. Black areas are pixels set in the MOTH export. A high overlap with the ground truth shapes can be observed.
Figure 4. Groovy export of small shapes with pixel offsets. The figure shows the export of small ground truth shapes. In comparison to the previous figure, a lower overlap between the ground truth and the export is visible.
Figure 5. Real world example using MOTH. The proposals are generated via QuPath and extracted from the project via MOTH. The proposals are evaluated and improved via custom methods and loaded back into QuPath for visual inspection using MOTH.
12 pages, 4410 KiB  
Article
Whole-Genome Sequence and Characterization of Ralstonia solanacearum MLY102 Isolated from Infected Tobacco Stalks
by Guan Lin, Juntao Gao, Junxian Zou, Denghui Li, Yu Cui, Yong Liu, Lingxue Kong and Shiwang Liu
Genes 2024, 15(11), 1473; https://doi.org/10.3390/genes15111473 - 15 Nov 2024
Viewed by 247
Abstract
Background/Objectives: Bacterial wilt is a soil-borne disease caused by Ralstonia solanacearum that causes huge losses to crop economies worldwide. Methods: In this work, strain MLY102 was isolated from a diseased tobacco stalk and further identified as R. solanacearum. The genomic properties of MLY102 were explored by performing biochemical characterization, genome sequencing, compositional analysis, functional annotation and comparative genomic analysis. Results: On TTC medium, MLY102 colonies had a pinkish-red center surrounded by a fluid, milky-white margin. The biochemical results revealed that MLY102 can utilize carbon sources including D-glucose (dGLU), cane sugar (SAC) and D-trehalose dihydrate (dTRE). Genome sequencing through the DNBSEQ and PacBio platforms revealed a genome size of 5.72 Mb with a G+C content of 67.59%. The genome consists of a circular chromosome and a circular giant plasmid with 5283 protein-coding genes. A comparison of the genomes revealed that MLY102 is closely related to GMI1000 and CMR15 but carries 498 unique genes and 13 homologous genes in the species-specific gene family, indicating a high degree of genomic uniqueness. Conclusions: The unique characteristics and genomic data of MLY102 provide an important reference for the prevention and control of bacterial wilt disease. Full article
(This article belongs to the Section Plant Genetics and Genomics)
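As a hedged illustration of the kind of compositional analysis the abstract reports (a 5.72 Mb genome with 67.59% G+C), and not the authors' actual pipeline, genome size and G+C content can be computed from an assembly FASTA with Biopython; the file name below is a placeholder.

```python
# Illustrative only: compute genome size and G+C content from an assembly FASTA.
# The file name is hypothetical; the authors' analysis pipeline is not shown here.
from Bio import SeqIO

total = 0
gc = 0
for record in SeqIO.parse("MLY102_assembly.fasta", "fasta"):
    seq = str(record.seq).upper()
    total += len(seq)                      # sum contig lengths
    gc += seq.count("G") + seq.count("C")  # count G and C bases

print(f"Genome size: {total / 1e6:.2f} Mb")
print(f"G+C content: {100 * gc / total:.2f}%")
```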
Figures:
Figure 1. Colony appearance (A) and cellular morphology (B) of R. solanacearum MLY91.
Figure 2. Circle graphs of the R. solanacearum MLY102 chromosome (A) and giant plasmid (B).
Figure 3. Distribution of gene lengths of R. solanacearum MLY102.
Figure 4. Functional classification of genes in R. solanacearum MLY102. (A): COG function classification; (B): GO function classification; (C): KEGG function classification.
Figure 5. Heatmap of average nucleotide identity (ANI) between eleven strains of R. solanacearum.
Figure 6. Heatmap of dispensable genes (A), Venn diagram of the pan gene set (B) and statistics of the number of homologous genes in the gene families (C) of R. solanacearum MLY102 and the reference strains.
Figure 7. Phylogenetic trees of R. solanacearum MLY102 and reference strains based on core pan (A) and gene family (B) results.
28 pages, 1861 KiB  
Article
Human Operator Mental Fatigue Assessment Based on Video: ML-Driven Approach and Its Application to HFAVD Dataset
by Walaa Othman, Batol Hamoud, Nikolay Shilov and Alexey Kashevnik
Appl. Sci. 2024, 14(22), 10510; https://doi.org/10.3390/app142210510 - 14 Nov 2024
Viewed by 449
Abstract
The detection of the human mental fatigue state holds immense significance due to its direct impact on work efficiency, specifically in system operation control. Numerous approaches have been proposed to address the challenge of fatigue detection, aiming to identify signs of fatigue and alert the individual. This paper introduces an approach to human mental fatigue assessment based on the application of machine learning techniques to the video of a working operator. For validation purposes, the approach was applied to the dataset "Human Fatigue Assessment Based on Video Data" (HFAVD), which integrates video data with features computed by our computer vision deep learning models. The incorporated features encompass head movements represented by Euler angles (roll, pitch, and yaw), vital signs (blood pressure, heart rate, oxygen saturation, and respiratory rate), and eye and mouth states (blinking and yawning). The integration of these features eliminates the need for the manual calculation or detection of these parameters, and it obviates the requirement for sensors and external devices, which are commonly employed in existing datasets. The main objective of our work is to advance research in fatigue detection, particularly in work and academic settings. For this reason, we conducted a series of experiments by utilizing machine learning techniques to analyze the dataset and assess the fatigue state based on the features predicted by our models. The results reveal that the random forest technique consistently achieved the highest accuracy and F1-score across all experiments, predominantly exceeding 90%. These findings suggest that random forest is a highly promising technique for this task and demonstrate a strong association between the predicted features used to annotate the videos and the state of fatigue. Full article
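The classification setup the abstract describes (a random forest over per-clip features such as head Euler angles, vital signs, and blink/yawn states, scored by accuracy and F1) can be sketched roughly as follows; this is not the authors' code, and the CSV file and feature names are assumptions.

```python
# Illustrative sketch (not the authors' code): train a random forest on the kinds of
# per-segment features the abstract lists and report accuracy and F1.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Hypothetical CSV: one row per video segment with precomputed features and a fatigue label.
df = pd.read_csv("hfavd_features.csv")
feature_cols = ["roll", "pitch", "yaw", "heart_rate", "resp_rate",
                "spo2", "blink_rate", "yawn_rate"]
X_train, X_test, y_train, y_test = train_test_split(
    df[feature_cols], df["fatigued"], test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("F1:", f1_score(y_test, pred))
```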
Figures:
Figure 1. Timeline of each session.
Figure 2. Models used to label the videos.
Figure 3. The overall scheme used for detecting the fatigue state.
Figure 4. The relationship between the mental performance and the inverse of fatigue (red dotted line denotes an example threshold value separating fatigued and not fatigued states).
Figure 5. The relationship between the threshold and the F1-score.
34 pages, 12661 KiB  
Article
Discovery of Alanomyces manoharacharyi: A Novel Fungus Identified Using Genome Sequencing and Metabolomic Analysis
by Shiwali Rana and Sanjay K. Singh
J. Fungi 2024, 10(11), 791; https://doi.org/10.3390/jof10110791 - 14 Nov 2024
Viewed by 219
Abstract
In this study, a new species of Alanomyces was isolated as an endophyte from the bark of Azadirachta indica from Mulshi, Maharashtra. The identity of this isolate was confirmed based on its asexual morphological characteristics as well as multi-gene phylogeny based on the internal transcribed spacer (ITS) and large subunit (LSU) nuclear ribosomal RNA (rRNA) regions. As this was only the second species to be reported in this genus, we sequenced its genome to increase our knowledge of the possible applicability of this genus to various industries. Its genome length was found to be 35.01 Mb, harboring 7870 protein-coding genes as per Augustus and 8101 genes using GeMoMa. Many genes were annotated using the Clusters of Orthologous Groups (COGs) database, the Kyoto Encyclopedia of Genes and Genomes (KEGG), Gene Ontology (GO), Swiss-Prot, NCBI non-redundant nucleotide sequences (NTs), and NCBI non-redundant protein sequences (NRs). Repeat sequences were predicted using Proteinmask and RepeatMasker; tRNAs were detected using tRNAscan, and snRNAs were predicted using rfam_scan. The genome was also annotated using the Pathogen–Host Interactions Database (PHI-base) and AntiSMASH. To confirm the evolutionary history, average nucleotide identity (ANIb) analysis, phylogeny based on orthologous proteins, and single nucleotide polymorphism (SNP) analysis were carried out. Metabolic profiling of the methanolic extract of dried biomass and the ethyl acetate extract of the filtrate revealed a variety of compounds of great importance in the pharmaceutical and cosmetic industries. The characterization and genomic analysis of the newly discovered species Alanomyces manoharacharyi highlight its potential applicability across multiple industries, particularly pharmaceuticals and cosmetics, owing to the diverse secondary metabolites and unique genetic features it possesses. Full article
(This article belongs to the Special Issue Taxonomy, Systematics and Evolution of Forestry Fungi, 2nd Edition)
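As a small, hypothetical illustration related to the gene counts quoted above (7870 protein-coding genes predicted by Augustus), the number of predicted genes can be tallied from a GFF3 annotation file; the file name is a placeholder, and this is not part of the authors' workflow.

```python
# Illustrative only: count predicted protein-coding genes in a GFF3 file such as
# Augustus output. The file name is hypothetical.
def count_genes(gff_path: str) -> int:
    n = 0
    with open(gff_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue                      # skip comment/header lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 3 and fields[2] == "gene":
                n += 1                        # column 3 of GFF3 is the feature type
    return n


print(count_genes("alanomyces_augustus.gff3"))
```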
Figures:
Figure 1. Molecular phylogenetic analysis of the new species Alanomyces manoharacharyi based on the ML method using combined ITS and LSU sequence data. The new species is shown in blue. Statistical support values of 70% or more are displayed next to each node, and UFBS values and SH-aLRT are obtained from 1000 replicates using IQ-TREE and the TIM2e + I + G4 model.
Figure 2. Colonies on various media after 10 days. (A,B) MEA; (C,D) V8 juice agar; (E,F) CMA; (G,H) RBA; (I,J) CDA; (K,L) PCA; (M,N) SDA; (O,P) PDA; (A,C,E,G,I,K,M,O) front view; (B,D,F,H,J,L,N,P) reverse view.
Figure 3. Alanomyces manoharacharyi NFCCI 5738; (A–D) Hyphae; (E) Hyphae showing anastomosis; (F,G) Conidiomata; (H) Ruptured conidiomata; (I) Ruptured conidiomata showing numerous dense conidiophores; the black arrow shows ampulliform conidiogenous cells; the white arrow shows short, stumpy conidiophores; (J) Ruptured conidiomata with numerous conidia; (K–M) Conidia. Bar = 20 µm (A–K), 10 µm (L,M).
Figure 4. MALDI-TOF MS spectra of Alanomyces manoharacharyi NFCCI 5738 indicating the protein profile (2–20 KD).
Figure 5. Genome diagram of Alanomyces manoharacharyi NFCCI 5738; A: Contig; B: Negative Gene; C: Positive Gene; D: Reference Map with Aplosporella punicola CBS 121167; E: Signal Peptide with cleavage sites (Signal LIP); F: Repeat regions; G: rRNA Genes; H: GC variation and I: GC skew.
Figure 6. Functional annotation of Alanomyces manoharacharyi NFCCI 5738 genes encoding for proteins using the Clusters of Orthologous Genes (COGs) database.
Figure 7. Functional annotation of Alanomyces manoharacharyi NFCCI 5738 genes encoding for proteins using Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis.
Figure 8. Functional annotation of Alanomyces manoharacharyi NFCCI 5738 predicted genes encoding for proteins using Gene Ontology (GO) analysis; Red bars represent biological processes, blue bars represent cellular component and green represent molecular function.
Figure 9. Carbohydrate-active enzyme (CAZyme) functional classification and corresponding genes present in the Alanomyces manoharacharyi NFCCI 5738 genome. (A): Carbohydrate-active enzyme functional classes; (B): Carbohydrate-active enzyme functional subclasses.
Figure 10. Distribution map of mutation types in the pathogen PHI phenotype of Alanomyces manoharacharyi NFCCI 5738.
Figure 11. Comparison of biosynthetic gene cluster components in Alanomyces manoharacharyi NFCCI 5738 with known biosynthetic gene clusters for the biosynthesis of (A) Patulin; (B) Tetrahydroxynaphthalene; (C) Biotin; (D) Aspterric acid; (E) Mellein; (F) Chaetocin; (G) Viridicatumtoxin; (H) Cryptosporioptide; (I) Phomasetin; and (J) Dimerum acid.
Figure 12. Heatmap of ANIb percentage identity between the allied genera strains compared with the Alanomyces manoharacharyi NFCCI 5738. ANIb analysis was carried out for all 55 genomes calculated based on genome sequences.
Figure 13. Phylogenetic analysis of 55 taxa of Alanomyces manoharacharyi NFCCI 5738 and allied taxa based on the orthologous proteins identified using OrthoFinder. The new species is shown in blue. Only the bootstrap values higher than 70 are shown.
Figure 14. The maximum phylogenetic tree is based on the 130874 core genome SNPs identified using Panseq. The number of bootstraps is indicated as well. Only the bootstrap values higher than 70 are shown. The new species is shown in blue.
Figure 15. Results of TargetP analysis. Cumulative count of predicted proteins containing a signal peptide (SP), mitochondrial translocation signal (mTP), and no-targeting peptides (other).
Figure 16. LC–MS analysis of extracts from Alanomyces manoharacharyi NFCCI 5738 for the identification of constituents. (A) Methanolic extract, Positive ion mode; (B) Ethyl acetate extract, Positive ion mode; (C) Methanolic extract, Negative ion mode; (D) Ethyl acetate extract, Negative ion mode.
Figure 17. Metabolites identified from the methanolic extract of biomass and the ethyl acetate extract of the filtrate of Alanomyces manoharacharyi NFCCI 5738 using LC–MS in positive and negative ion mode.
23 pages, 4854 KiB  
Article
Ensemble Network-Based Distillation for Hyperspectral Image Classification in the Presence of Label Noise
by Youqiang Zhang, Ruihui Ding, Hao Shi, Jiaxi Liu, Qiqiong Yu, Guo Cao and Xuesong Li
Remote Sens. 2024, 16(22), 4247; https://doi.org/10.3390/rs16224247 - 14 Nov 2024
Viewed by 285
Abstract
Deep learning has made remarkable strides in hyperspectral image (HSI) classification, significantly improving classification performance. However, the challenge of obtaining accurately labeled training samples persists, primarily due to the subjectivity of human annotators and their limited domain knowledge. This often results in erroneous labels, commonly referred to as label noise. Such noisy labels can critically impair the performance of deep learning models, making it essential to address this issue. While previous studies focused on label noise filtering and label correction, these approaches often require estimating noise rates and may inadvertently propagate noisy labels to clean labels, especially in scenarios with high noise levels. In this study, we introduce an ensemble network-based distillation (END) method specifically designed to address the challenges posed by label noise in HSI classification. The core idea is to leverage multiple base neural networks to generate an estimated label distribution from the training data. This estimated distribution is then used alongside the ground-truth labels to train the target network effectively. Moreover, we propose a parameter-adaptive loss function that balances the impact of both the estimated and ground-truth label distributions during the training process. Our approach not only simplifies architectural requirements but also integrates seamlessly into existing deep learning frameworks. Comparative experiments on four hyperspectral datasets demonstrate the effectiveness of our method, highlighting its competitive performance in the presence of label noise. Full article
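The paper's exact parameter-adaptive loss is not given on this page; the sketch below only shows the general form such a distillation objective can take, combining the ensemble's estimated label distribution with the (possibly noisy) ground-truth labels, with the balancing weight alpha left as an assumed placeholder.

```python
# Illustrative sketch of a distillation-style objective combining an ensemble's estimated
# label distribution with noisy ground-truth labels. The paper's exact parameter-adaptive
# weighting is not reproduced here; `alpha` is a placeholder.
import torch
import torch.nn.functional as F


def end_style_loss(student_logits, ensemble_probs, noisy_labels, alpha=0.5):
    """alpha balances the estimated distribution against the ground-truth labels."""
    # Cross-entropy against the (possibly noisy) hard labels.
    ce = F.cross_entropy(student_logits, noisy_labels)
    # KL divergence between the student prediction and the ensemble's soft distribution.
    log_p = F.log_softmax(student_logits, dim=1)
    kl = F.kl_div(log_p, ensemble_probs, reduction="batchmean")
    return alpha * kl + (1.0 - alpha) * ce


# Usage idea: ensemble_probs would come from averaging out-of-bag predictions of the
# base networks over the training samples, as described in the abstract.
```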
Figures:
Figure 1. The framework of the proposed END method. First, T base neural networks are trained on resampling datasets. Next, the estimated label distribution of the training data is computed by predicting out-of-bag (OOB) samples. Finally, the estimated distribution (ED) is combined with the ground-truth distribution (GD) to train a student network (S).
Figure 2. False-color image and reference map of the SV. (a) False-color image created from a combination of three spectral bands. (b) Reference map detailing the ground truth classification.
Figure 3. False-color image and reference map of the HOU. (a) False-color image created from a combination of three spectral bands. (b) Reference map detailing the ground truth classification.
Figure 4. False-color image and reference map of the PU. (a) False-color image created from a combination of three spectral bands. (b) Reference map detailing the ground truth classification.
Figure 5. Classification maps generated for the SV image utilizing various comparison methods. (a) RF: OA = 86.66%. (b) 2D-CNN: OA = 87.29%. (c) SLS: OA = 89.71%. (d) MSSG: OA = 92.53%. (e) MLN: OA = 92.98%. (f) DCRN: OA = 94.36%. (g) AAN: OA = 95.14%. (h) TCRL: OA = 95.78%. (i) END: OA = 97.30%.
Figure 6. Classification maps generated for the HOU image utilizing various comparison methods. (a) RF: OA = 77.61%. (b) 2D-CNN: OA = 79.36%. (c) MSSG: OA = 82.12%. (d) SLS: OA = 83.02%. (e) MLN: OA = 84.24%. (f) DCRN: OA = 84.47%. (g) AAN: OA = 87.79%. (h) TCRL: OA = 88.47%. (i) END: OA = 90.77%.
Figure 7. Classification maps generated for the PU image utilizing various comparison methods. (a) RF: OA = 80.26%. (b) 2D-CNN: OA = 84.21%. (c) SLS: OA = 88.16%. (d) MSSG: OA = 89.65%. (e) MLN: OA = 90.36%. (f) DCRN: OA = 94.26%. (g) AAN: OA = 95.07%. (h) TCRL: OA = 95.98%. (i) END: OA = 96.94%.
Figure 8. OA curves of the END method with different versions of loss functions plotted against the number of training samples. (a) SV; (b) HOU; (c) HOU.
Figure 9. OA curves of SV, HOU, and PU datasets under different ensemble sizes.
Figure 10. Classification accuracy curves of SV, HOU, and PU datasets with the increase in training epochs.
21 pages, 3253 KiB  
Article
Gene Expression Comparison Between the Injured Tubercule Skin of Turbot (Scophthalmus maximus) and the Scale Skin of Brill (Scophthalmus rhombus)
by João Estêvão, Andrés Blanco-Hortas, Juan A. Rubiolo, Óscar Aramburu, Carlos Fernández, Antonio Gómez-Tato, Deborah M. Power and Paulino Martínez
Fishes 2024, 9(11), 462; https://doi.org/10.3390/fishes9110462 - 14 Nov 2024
Viewed by 251
Abstract
Turbot and brill are two congeneric commercial flatfish species with striking differences in skin organization. The calcified appendages in turbot skin are conical tubercles, while in brill, they are elasmoid scales. A skin injury involving epidermal and dermal levels was evaluated 72 h post-injury to compare the skin regeneration processes between both species. An immune-enriched 4x44k turbot oligo-microarray was used to characterize the skin transcriptome and gene expression profiles in both species. RNA-seq was also performed on the brill samples to improve transcriptome characterization and validate the microarray results. A total of 15,854 and 12,447 expressed genes were identified, respectively, in the turbot and brill skin (10,101 shared) using the oligo-microarray (11,953 and 9629 annotated). RNA-seq enabled the identification of 11,838 genes in brill skin (11,339 annotated). Functional annotation of skin transcriptomes was similar in both species, but in turbot, it was enriched in mechanisms related to maintenance of epithelial structure, mannosidase activity, phospholipid binding, and cell membranes, while in brill, it was enriched in biological and gene regulation mechanisms, tissue development, and transferase and catalytic activities. The number of DEGs identified after skin damage in brill and turbot was 439 and 143, respectively (only 14 shared). Functions related to catabolic and metabolic processes, visual and sensorial perception, response to wounding, and wound healing were enriched in turbot DEGs, while metabolism, immune response, oxidative stress, phospholipid binding, and response to stimulus were enriched in brill. The results indicate that differences may be related to the stage of wound repair due to their different skin architecture. This work provides a foundation for future studies directed at skin defense mechanisms, with practical implications in flatfish aquaculture. Full article
(This article belongs to the Section Biology and Ecology)
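GO-term enrichment of a DEG list, as reported above, typically reduces to a hypergeometric test; the snippet below is an illustrative calculation with made-up counts, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): a one-sided hypergeometric test of the
# kind underlying GO-term enrichment for a DEG list. All counts are hypothetical.
from scipy.stats import hypergeom

N = 12000   # annotated genes in the skin transcriptome (background)
K = 150     # background genes annotated with the GO term of interest
n = 439     # differentially expressed genes (e.g., brill after injury)
k = 18      # DEGs annotated with that GO term

# P(X >= k) under the hypergeometric null: probability of drawing at least k
# term-annotated genes in a random sample of n genes from the background.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```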
Figures:
Figure 1. Functional annotation of (A) turbot (S. maximus) and (B) brill (S. rhombus) skin transcriptomes.
Figure 2. Top 10 most-significantly enriched gene ontology (GO) terms associated with biological process (BP), molecular function (MF), and cellular component (CC) in (A) turbot (S. maximus) and (B) brill (S. rhombus) skin transcriptomes.
Figure 3. Significantly enriched gene ontology (GO) terms in the differentially expressed genes in the skin of (A) turbot (S. maximus) and (B) brill (S. rhombus) associated with biological processes (BPs), molecular functions (MFs), and cellular components (CCs).
Figure 4. Significantly enriched gene ontology (GO) terms associated with biological processes (BPs) and molecular functions (MFs) that were represented in the differentially expressed genes in the skin of turbot and brill.
Figure 5. Skin of turbot and brill before and 72 h after an injury. (A) Brill skin before injury; (B) brill skin 72 h after injury; (C) turbot skin before injury; (D) turbot skin 72 h after injury; (E) edema and spongiosis of the stratum spongiosum (arrow) and spongiosis of the epidermis (arrowhead) in turbot skin; (F) spongiosis and infiltration of inflammatory cells in the dermis of turbot skin (arrowhead); (G) hemorrhage (arrow), spongiosis, and infiltration of inflammatory cells in the hypodermis (arrowhead) of turbot skin; (H) infiltration of inflammatory cells in the dermis (arrowhead) of turbot skin. E—epidermis, skin structures; D—dermis; H—hypodermis; M—muscle; S—scale; P—scale pocket.
27 pages, 27328 KiB  
Article
An Aerial Photogrammetry Benchmark Dataset for Point Cloud Segmentation and Style Translation
by Meida Chen, Kangle Han, Zifan Yu, Andrew Feng, Yu Hou, Suya You and Lucio Soibelman
Remote Sens. 2024, 16(22), 4240; https://doi.org/10.3390/rs16224240 - 14 Nov 2024
Viewed by 240
Abstract
The recent surge in diverse 3D datasets spanning various scales and applications marks a significant advancement in the field. However, the comprehensive process of data acquisition, refinement, and annotation at a large scale poses a formidable challenge, particularly for individual researchers and small teams. To this end, we present a novel synthetic 3D point cloud generation framework that can produce detailed outdoor aerial photogrammetric 3D datasets with accurate ground truth annotations without the labor-intensive and time-consuming data collection/annotation processes. Our pipeline procedurally generates synthetic environments, mirroring real-world data collection and 3D reconstruction processes. A key feature of our framework is its ability to replicate consistent quality, noise patterns, and diversity similar to real-world datasets. This is achieved by adopting UAV flight patterns that resemble those used in real-world data collection processes (e.g., the cross-hatch flight pattern) across various synthetic terrains that are procedurally generated, thereby ensuring data consistency akin to real-world scenarios. Moreover, the generated datasets are enriched with precise semantic and instance annotations, eliminating the need for manual labeling. Our approach has led to the development and release of the Semantic Terrain Points Labeling—Synthetic 3D (STPLS3D) benchmark, an extensive outdoor 3D dataset encompassing over 16 km2, featuring up to 19 semantic labels. We also collected, reconstructed, and annotated four real-world datasets for validation purposes. Extensive experiments on these datasets demonstrate our synthetic datasets’ effectiveness, superior quality, and their value as a benchmark dataset for further point cloud research. Full article
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)
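As a rough, assumption-laden sketch of the cross-hatch UAV flight pattern the abstract says is replicated during synthetic data generation, waypoints for such a pattern could be generated as follows; the extent, line spacing, and altitude are invented parameters, not values from the paper.

```python
# Illustrative sketch: waypoints for a cross-hatch (two perpendicular lawn-mower sweeps)
# UAV flight pattern. All parameters are hypothetical.
import numpy as np


def cross_hatch_waypoints(extent=1000.0, spacing=50.0, altitude=120.0):
    """Return an (N, 3) array of waypoints: passes along x, then perpendicular passes."""
    lines = np.arange(0.0, extent + spacing, spacing)
    waypoints = []
    for i, y in enumerate(lines):            # passes parallel to the x-axis
        xs = (0.0, extent) if i % 2 == 0 else (extent, 0.0)   # alternate turn direction
        waypoints += [(xs[0], y, altitude), (xs[1], y, altitude)]
    for i, x in enumerate(lines):            # perpendicular passes (the "cross")
        ys = (0.0, extent) if i % 2 == 0 else (extent, 0.0)
        waypoints += [(x, ys[0], altitude), (x, ys[1], altitude)]
    return np.array(waypoints)


print(cross_hatch_waypoints().shape)
```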
Figures:
Figure 1. The proposed synthetic data generation pipeline.
Figure 2. The class distribution of the real dataset of our STPLS3D. Note the logarithmic scale for the vertical axis.
Figure 3. Additional examples of synthetic and real-world point clouds in our STPLS3D dataset.
Figure 4. The class distribution of synthetic subsets of our STPLS3D. Note the logarithmic scale for the vertical axis. Please refer to the appendix for the detailed definition of the semantic categories in this dataset.
Figure 5. Qualitative comparison of tree crowns generated by ray-casted, synthetic photogrammetry, and real photogrammetry.
Figure 6. Example visualization of the FDc dataset.
Figure 7. Comparison of real image and point cloud and synthetic data and style transfer result.
25 pages, 4283 KiB  
Article
Shape-Aware Adversarial Learning for Scribble-Supervised Medical Image Segmentation with a MaskMix Siamese Network: A Case Study of Cardiac MRI Segmentation
by Chen Li, Zhong Zheng and Di Wu
Bioengineering 2024, 11(11), 1146; https://doi.org/10.3390/bioengineering11111146 - 13 Nov 2024
Viewed by 292
Abstract
The transition in medical image segmentation from fine-grained to coarse-grained annotation methods, notably scribble annotation, offers a practical and efficient way to prepare training data for deep learning applications. However, these methods often compromise segmentation precision and result in irregular contours. This study targets the enhancement of scribble-supervised segmentation to match the accuracy of fine-grained annotation. Capitalizing on the consistency of target shapes across unpaired datasets, this study introduces a shape-aware scribble-supervised learning framework (MaskMixAdv) addressing two critical tasks: (1) Pseudo label generation, where a mixup-based masking strategy enables image-level and feature-level data augmentation to enrich coarse-grained scribble annotations. A dual-branch siamese network is proposed to generate fine-grained pseudo labels. (2) Pseudo label optimization, where a CNN-based discriminator is proposed to refine pseudo label contours by distinguishing them from external unpaired masks during model fine-tuning. MaskMixAdv works under constrained annotation conditions as a label-efficient learning approach for medical image segmentation. A case study on public cardiac MRI datasets demonstrated that the proposed MaskMixAdv outperformed the state-of-the-art methods and narrowed the performance gap between scribble-supervised and mask-supervised segmentation. This innovation cuts annotation time by at least 95%, with only a minor impact on Dice performance, specifically a 2.6% reduction. The experimental outcomes indicate that employing efficient and cost-effective scribble annotation can achieve high segmentation accuracy, significantly reducing the typical requirement for fine-grained annotations. Full article
(This article belongs to the Section Biosignal Processing)
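The exact mixup-based masking strategy is not detailed on this page; the snippet below is a generic image-level mask-mix between two training samples and their scribble maps, intended only to illustrate the idea, with the block size and random mask layout as assumptions.

```python
# Illustrative sketch of image-level mask-based mixing between two training samples.
# Not the paper's exact strategy; block size and mask layout are assumptions.
import torch


def mask_mix(img_a, img_b, scrib_a, scrib_b, block=32):
    """Mix two images (C, H, W) and their scribble maps (1, H, W) with a random block mask."""
    _, h, w = img_a.shape
    # Coarse random binary mask upsampled to image resolution: 1 keeps pixels from A, 0 from B.
    coarse = (torch.rand(1, 1, h // block, w // block) > 0.5).float()
    mask = torch.nn.functional.interpolate(coarse, size=(h, w), mode="nearest")[0]
    mixed_img = mask * img_a + (1 - mask) * img_b
    mixed_scrib = (mask * scrib_a + (1 - mask) * scrib_b).long()
    return mixed_img, mixed_scrib, mask


# Usage idea: the mixed image/scribble pair augments the scribble-supervised training set;
# the same mask can be reused to separate the two branch predictions again.
```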
Graphical abstract
Figures:
Figure 1. An overview of medical image segmentation under three different kinds of supervision: (a) mask-supervised segmentation based on paired images and pixel-labelled masks, (b) scribble-supervised segmentation based on paired images and coarse-grained scribble annotations, and (c) adversarial scribble-supervised segmentation based on paired images, coarse-grained scribbles, and additional unpaired masks. Cases from two cardiac MRI datasets (ACDC and MSCMR) are shown to give a conceptual comparison. As can be observed, since the regions of interest (ROI) of the ACDC and MSCMR datasets are shared, it is reasonable to transfer the shape prior across the two datasets.
Figure 2. The backbone of the proposed MaskMixAdv framework is a dual-branch siamese network (DBSN), and a CNN-based discriminator (θ_dis) is built on top of the backbone for adversarial learning. MaskMixAdv consists of two phases: the first phase (MaskMix) performs data augmentation and scribble-supervised learning, and the second phase (Adv) achieves adversarial learning. Note that some connecting lines of the loss function in the figure are omitted for better observation.
Figure 3. The architecture of the proposed siamese network DBSN.
Figure 4. Illustration of the Mask phase in the proposed MaskMixAdv framework and other methods for data augmentation. For the Mixup-based approaches, this figure introduces white outlines to easily distinguish the multi-sample mixing process. Only perturbations at the image level are shown here; perturbations at the feature level are similar and thus omitted. Note the scribbles shown here are bolded for ease of viewing. Better zoom in for more details.
Figure 5. Visualization of the results of the proposed MaskMixAdv and other methods for cardiac MRI segmentation on the ACDC dataset. Note that the scribbles shown are bolded for ease of viewing.
Figure 6. Comparison with existing scribble-supervised segmentation methods with and without external masks on the ACDC dataset. ↑ and ↓ denote the metrics improved and reduced after incorporating external masks, respectively. * The performance of WSL4MIS implemented by this study is evaluated.
Figure 7. Results of the semi-supervised learning experiments under different label fractions. The gray, orange, and red blocks indicate the performance (HD95 and Dice) of the right ventricle (RV), myocardium (Myo), and left ventricle (LV) by MaskMixAdv, respectively. In addition, statistical significance analysis is conducted on a case-by-case basis between the 100% labelled results and the results labelled from 10% to 90%, whose p-values are reported as * or n.s.
Figure 8. Box illustration of the performance of MaskMixAdv with different numbers of masks from the additional unpaired dataset. From (a–d), the results present RV, Myo, LV, and the average value in order. The first row reports the 3D Dice results, while the second row reports the Hausdorff Distance. Note that the white circles denote the mean values. The dotted red lines indicate the performance of the proposed MaskMix, which was trained without L_adv, i.e., the number of external mask cases was 0.
Figure 9. Comparison between different annotations, including fine-grained masks, coarse-grained scribbles, and coarse-grained points. The points shown here are bolded for ease of viewing. Note that the above three contain supervisory information in descending order.
17 pages, 47728 KiB  
Article
Accurate Feature Extraction from Historical Geologic Maps Using Open-Set Segmentation and Detection
by Aaron Saxton, Jiahua Dong, Albert Bode, Nattapon Jaroenchai, Rob Kooper, Xiyue Zhu, Dou Hoon Kwark, William Kramer, Volodymyr Kindratenko and Shirui Luo
Geosciences 2024, 14(11), 305; https://doi.org/10.3390/geosciences14110305 - 13 Nov 2024
Viewed by 238
Abstract
This study presents a novel AI method for extracting polygon and point features from historical geologic maps, representing a pivotal step for assessing the mineral resources needed for the energy transition. Our innovative method involves using map units in the legends as prompts for one-shot segmentation and detection in geological feature extraction. The model, integrated with a human-in-the-loop system, enables geologists to refine results efficiently, combining the power of AI with expert oversight. Tested on geologic maps annotated by USGS and DARPA for the AI4CMA DARPA Challenge, our approach achieved a median F1 score of 0.91 for polygon feature segmentation and 0.73 for point feature detection when such features had abundant annotated data, outperforming current benchmarks. By efficiently and accurately digitizing historical geologic maps, our method promises to provide crucial insights for responsible policymaking and effective resource management in the global energy transition. Full article
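The reported polygon scores are pixel-wise F1 values; as an illustration (not the challenge's official scorer), F1 between a predicted mask and its ground truth can be computed as follows, with the inputs assumed to be boolean arrays.

```python
# Illustrative sketch: pixel-wise F1 between a predicted polygon-feature mask and its
# ground truth. Inputs are hypothetical boolean arrays of the same shape.
import numpy as np


def f1_score_masks(pred: np.ndarray, gt: np.ndarray) -> float:
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # true positives
    fp = np.logical_and(pred, ~gt).sum()    # false positives
    fn = np.logical_and(~pred, gt).sum()    # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```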
Figures:
Figure 1. Example of data visualization: This figure illustrates a sample dataset embedded within a comprehensive map. It includes the following components: 1. Main Map Content: Displays the area containing key features of interest. 2. Corner Coordinate: Typically located at the corner of the map content for georeferencing purposes. 3. Text Information: Provides metadata such as map location and geological age. 4. Map Legend Area: Contains a list of map units along with their descriptive text. 5. Segmentation Map: Shows an example of extracted polygon features using the map unit "Qal" as the query key.
Figure 2. (a) A geologic map sample with the map content area and legend area highlighted in red. The original map is overlaid with polygonal features to emphasize the discontinuity and the varying shapes and sizes of these features. (b) An illustration of the patch-wise segmentation model using the map unit as the prompt. (c) Congregated results after the patch-wise segmentation model inference and restitching.
Figure 3. (a) The uppermost plot depicts a geologic map featuring a legend with six symbol items, which are displayed as a red box in the upper-middle region; these symbols are almost indistinguishable when lumped together. The accompanying JSON file on the right-hand side documents the names and coordinates of each legend item. The bottom section showcases two additional maps with legends marked in red boxes. (b) The inconsistent symbology of legend items among maps in the training, validation, and testing dataset.
Figure 4. Flowchart illustrating the entire processing flow.
Figure 5. Model performance on legend map unit extraction. (a) Visualization of the extracted map unit on patch images. (b) Precision–Recall curve to illustrate the trade-off between precision and recall for different thresholds.
Figure 6. (a–c) Model performance on example patched images. This visualization includes the patch image, legend, predicted segmentation mask, and ground truth (GT) segmentation mask.
Figure 7. Model performance on polygon feature extraction after aggregating all polygon and point features across the entire map. (a) Visualization of the raw map; (b) visualization of the extracted features; (c,d) zoom-in plot for better visualization. Different colors represent different point features.
Figure 8. (a,b) The model's performance on validation data for various types of legend items. The columns from left to right are (1) patchified image, (2) resized legend item, (3) model predicted annotation (red circle), (4) ground truth annotation (blue circle).
Figure 9. Model performance on an entire map; red circles represent the model prediction, and blue circles represent the ground truth. (a) Model performance for predicting symbol '3_pt', (b) zoom-in plot for better visualization.
26 pages, 3672 KiB  
Article
Development of a Cost-Efficient and Glaucoma-Specialized OD/OC Segmentation Model for Varying Clinical Scenarios
by Kai Liu and Jicong Zhang
Sensors 2024, 24(22), 7255; https://doi.org/10.3390/s24227255 - 13 Nov 2024
Viewed by 166
Abstract
Most existing optic disc (OD) and cup (OC) segmentation models are biased toward the dominant size and easy class (normal class), resulting in suboptimal performance on glaucoma-confirmed samples. Thus, these models are not optimal choices for assisting in tracking glaucoma progression and prognosis. Moreover, fully supervised models employing annotated glaucoma samples can achieve superior performance, although they are restricted by the high cost of collecting and annotating the glaucoma samples. Therefore, in this paper, we are dedicated to developing a glaucoma-specialized model by exploiting low-cost annotated normal fundus images, while simultaneously adapting to various common scenarios in clinical practice. We employ a contrastive learning and domain adaptation-based model by exploiting shared knowledge from normal samples. To capture glaucoma-related features, we utilize a Gram matrix to encode style information and the domain adaptation strategy to encode domain information, followed by narrowing the style and domain gaps between normal and glaucoma samples by contrastive and adversarial learning, respectively. To validate the efficacy of our proposed model, we conducted experiments utilizing two public datasets to mimic various common scenarios. The results demonstrate the superior performance of our proposed model across multiple scenarios, showcasing its proficiency in both the segmentation- and glaucoma-related metrics. In summary, our study illustrates a concerted effort to target confirmed glaucoma samples, mitigating the inherent bias issue in most existing models. Moreover, we propose an annotation-efficient strategy that exploits low-cost, normal-labeled fundus samples, mitigating the economic and labor burdens of employing a fully supervised strategy. Simultaneously, our approach demonstrates its adaptability across various scenarios, highlighting its potential utility in both assisting in the monitoring of glaucoma progression and assessing glaucoma prognosis. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
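The Gram matrix the abstract uses to encode style information has a standard definition; a minimal PyTorch sketch, with shapes assumed, is shown below.

```python
# Illustrative sketch: the Gram matrix of a convolutional feature map, a standard way to
# encode style information. Shapes are hypothetical.
import torch


def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """features: (B, C, H, W) -> (B, C, C) channel-by-channel correlation (style) matrix."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    gram = torch.bmm(flat, flat.transpose(1, 2))   # (B, C, C)
    return gram / (c * h * w)                      # normalize by feature-map size


# A contrastive loss can then pull the Gram matrices of predictions toward those of
# glaucoma-style samples, in line with the style-level supervision the abstract describes.
```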
Figures:
Figure 1. Illustrations of our motivation and the promising performance of our proposed model.
Figure 2. The proposed model adapts to three common situations (Scenario 1: only normal images have pixel-level annotations; Scenario 2: only glaucoma images have pixel-level annotations; Scenario 3: both normal and glaucoma images have pixel-level annotations). The pixel-level annotated normal fundus images (and pixel-level annotated glaucoma fundus images, if available) are utilized to capture general features with the pixel-level supervised annotations. The glaucoma fundus images (without pixel-level annotations) are utilized to capture glaucoma-related features with the style-level and domain-level supervised annotations. The proposed model encompasses a pixel-level supervised path that aims to generate pixel-level prediction results by soft dice loss; the style-level supervised path is designed to encourage the generation of pixel-level prediction results similar to glaucoma-style features by narrowing style gaps; and the domain-level supervised path encourages the generation of pixel-level prediction results close to the glaucoma domain at various domain levels by narrowing domain gaps. For detailed frameworks corresponding to each scenario, please refer to three separate images (Figures S2–S4) in the Supplementary Materials.
Figure 3. Visual comparison of segmentation results from the various models on the glaucoma samples from the ORIGA dataset. The upper three examples are common samples, while the lower three examples present challenging samples. The last method, denoted as "Proposed+G", encompasses the proposed style and domain transfer model with annotated glaucoma and normal fundus images in Scenario 2.
Figure 4. Visual comparison of the segmentation results from the various models on glaucoma samples from the G1020 dataset. The upper three examples are common samples, while the lower three examples represent challenging samples.
Figure 5. The distribution of the encoding (Bottom and Up1) and output features obtained from the various models, along with the corresponding ground truths of the normal and glaucoma classes. The three figures in the upper row are from the ORIGA dataset, while the lower three subfigures depict the results from the G1020 dataset.
Figure 6. The plots of the style gaps between the results generated by the various models and corresponding ground truths during the training stage. The left figure illustrates the results from the ORIGA dataset, while the right figure corresponds to the G1020 dataset. The N-pixel-level is the direct supervision with normal fundus images with pixel-level annotation; style-level is the proposed model with the style module and style-level annotations; domain-level is the proposed model with the domain module and domain-level annotations; and G-pixel-level represents the glaucoma samples with pixel-level annotations.
21 pages, 12428 KiB  
Article
Gaze Zone Classification for Driving Studies Using YOLOv8 Image Classification
by Frouke Hermens, Wim Anker and Charmaine Noten
Sensors 2024, 24(22), 7254; https://doi.org/10.3390/s24227254 - 13 Nov 2024
Viewed by 278
Abstract
Gaze zone detection involves estimating where drivers look in terms of broad categories (e.g., left mirror, speedometer, rear mirror). We here specifically focus on the automatic annotation of gaze zones in the context of road safety research, where the system can be tuned to specific drivers and driving conditions, so that an easy to use but accurate system may be obtained. We show with an existing dataset of eye region crops (nine gaze zones) and two newly collected datasets (12 and 10 gaze zones) that image classification with YOLOv8, which has a simple command line interface, achieves near-perfect accuracy without any pre-processing of the images, as long as a model is trained on the driver and conditions for which annotation is required (such as whether the drivers wear glasses or sunglasses). We also present two apps to collect the training images and to train and apply the YOLOv8 models. Future research will need to explore how well the method extends to real driving conditions, which may be more variable and more difficult to annotate for ground truth labels. Full article
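YOLOv8 image classification is driven through the ultralytics package; the minimal training-and-prediction sketch below uses its Python API, with the dataset path, epoch count, and image size as placeholders rather than the settings used in the paper.

```python
# Minimal sketch of training a YOLOv8 image-classification model with the ultralytics
# package. Dataset path and hyperparameters are placeholders; the dataset folder is
# expected to contain train/ and val/ subfolders with one folder per gaze zone.
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")                        # pretrained classification checkpoint
model.train(data="gaze_zones_dataset", epochs=20, imgsz=224)

# Predict the gaze zone for a new webcam frame (top-1 class name and confidence).
result = model("frame_0001.jpg")[0]
print(result.names[result.probs.top1], float(result.probs.top1conf))
```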
Figures:
Figure 1. Four images from one of the five drivers in the Lisa2 dataset [36].
Figure 2. Photograph of the setup. Two webcams were attached to a laptop controlling data collection and placed on the driver seat. Little round stickers in different colours served to help the participant to fixate on different gaze zones. The position of the sticker for the right window is indicated. Other stickers inside this image are for the speedometer, the centre console, and the right mirror.
Figure 3. Examples of images of looking and pointing in a different context. A total of 10 different targets were selected around the screen that the webcam was attached to and other parts of the room. Note that in between recording sessions the actor changed the blue jacket for a red jacket.
Figure 4. Accuracy per model trained on individual drivers for the Lisa2 dataset without glasses. Accuracy is defined as the percentage of predictions that agree with the annotated label (also known as the 'top1' accuracy).
Figure 5. Confusion matrices for each combination of the driver during training and the driver used for the test images, based on the validation sets.
Figure 6. Accuracy per driver on models trained on different numbers of drivers for the Lisa2 dataset without glasses.
Figure 7. Four images from one of the five drivers in the Lisa2 dataset, now with glasses.
Figure 8. (a) Accuracy per driver on images with glasses when trained on images without glasses or images with glasses. (b) Accuracy per driver on images with and without glasses when trained on images with and without glasses. Images are from the Lisa2 dataset.
Figure 9. Examples of images of the male driver, with and without glasses, recorded with our own app.
Figure 10. (a) Zone classification accuracy for the male and female driver for smaller (320 × 240) and larger (640 × 480) images (both without sunglasses). Each model was trained on that particular combination of driver and image size and then applied to the validation set (seen during training) and test set (not seen during training). (b) Accuracy per driver on a model trained with the same driver, on a model trained with the other driver, or a model trained on both drivers. Performance is computed across the training, validation, and test sets. (c) Accuracy for the male driver with or without sunglasses on a model trained with or without sunglasses or images with and without sunglasses ('Both'). Performance is computed across the training, validation, and test sets.
Figure 11. Zone classification accuracy for when an actor was looking or pointing at objects inside a living room. In between recordings, the actor changed from a red to a blue jacket, or vice versa. The change of the jacket reduced accuracy by around 5% (pointing) to 10% (looking) if these images were not included during training ('both' refers to when both red and blue jacket training images were included).
Figure 12. Screenshots from the first app that can be used to instruct participants to look at particular gaze zones and to collect images from the webcam, to extract frames, and to structure the images into the folders for image classification. Note that a section of the window is shown in both images for better visibility.
Figure 13. Screenshots from the second app that can be used to train the models and to generate the required file structure and annotations for object detection. Note that we did not use the object detection functionality in the present tests, because it is computationally more expensive and the image classification reached a near-perfect performance. Each image shows a section of the original screen for better visibility.
12 pages, 2598 KiB  
Article
Single-Nucleus RNA Sequencing Reveals the Transcriptome Profiling of Ovarian Cells in Adolescent Cyprinus carpio
by Mingxi Hou, Jin Zhang, Qi Wang, Ran Zhao, Yiming Cao, Yingjie Chen, Kaikuo Wang, Ning Ding, Yingjie Qi, Xiaoqing Sun, Yan Zhang and Jiongtang Li
Animals 2024, 14(22), 3263; https://doi.org/10.3390/ani14223263 - 13 Nov 2024
Viewed by 220
Abstract
The common carp (Cyprinus carpio) is a crucial freshwater species cultivated worldwide for food consumption. Female carp show better growth performance than males, which has motivated researchers to uncover the mechanism of gonadal differentiation and to produce mono-sex populations. However, knowledge of the mechanisms of ovarian development at single-cell resolution remains limited. Here, we conducted single-nucleus RNA sequencing in adolescent common carp ovaries. Our study obtained transcriptional profiles of 13,155 nuclei and revealed 13 distinct cell clusters in the ovaries, including three subtypes of germ cells and four subtypes of granulosa cells. We subsequently performed pseudotime trajectory analysis to delineate potential mechanisms underlying the development of germ cells and granulosa cells. We identified 1250 dynamically expressed genes in germ cells and 1815 in granulosa cells (q-value < 0.01), including zp3, eif4a2 and aspm in germ cells and fshr and esr1 in granulosa cells. The functional annotation showed that dynamically expressed genes in germ cells were involved in sperm–egg recognition and some terms related to meiosis, such as sister chromatid segregation and homologous recombination. Genes expressed dynamically in granulosa cells were related to the TGF-β signaling pathway, response to gonadotropin, and development of primary female sexual characteristics. In addition, the dynamically expressed genes in granulosa cells might relate to the complex communication between different cell types. In summary, our study provides a transcriptome profile of common carp ovaries at single-nucleus resolution, and we further reveal the potential cell type-specific mechanisms underlying oogenesis and the differentiation of granulosa cells, which will facilitate breeding all-female common carp populations. Full article
(This article belongs to the Special Issue Genetics, Breeding, and Farming of Aquatic Animals)
Show Figures

Figure 1

Figure 1
<p>Cell types in the common carp ovary. (<b>A</b>) Observation of histological features of major cell types in the ovary using H&amp;E staining. Scale bar = 200 µm (top), 50 µm (bottom); red arrows, oocyte; red arrowheads, oogonia; black arrowheads, granulosa cell. (<b>B</b>) Visualization of snRNA-seq data in t-SNE to reveal distinct cell clusters.</p>
Full article ">Figure 2
<p>The pseudotime trajectory analyses. (<b>A</b>) Differentiation trajectories of germ cells. (<b>B</b>) Differentiation trajectories of granulosa cells. The numbers inside the black circles represent the branch points of different cell states; different colors in the top two figures represent different cell types; the colors from dark to light in the middle two figures represent the degree of differentiation; different colors in the bottom two figures represent different cell states.</p>
Full article ">Figure 3
<p>Functional annotation of dynamic expression genes in germ cells. (<b>A</b>) Bar plot of the GO enrichment analysis of dynamic expression genes in germ cells. The number on each bar is the number of genes enriched in that term. BP, biological process; CC, cellular component; MF, molecular function. (<b>B</b>) Bubble diagram of the KEGG enrichment analysis of dynamic expression genes in germ cells. A larger enrichment factor indicates stronger and more significant enrichment of the differential genes in that pathway.</p>
Full article ">Figure 4
<p>Functional annotation of dynamic expression genes in granulosa cells. (<b>A</b>) Bar plot of the GO enrichment analysis of dynamic expression genes in granulosa cells. The number on each bar is the number of genes enriched in that term. BP, biological process; CC, cellular component; MF, molecular function. (<b>B</b>) Bubble diagram of the KEGG enrichment analysis of dynamic expression genes in granulosa cells. A larger enrichment factor indicates stronger and more significant enrichment of the differential genes in that pathway.</p>
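The enrichment factor and significance mentioned in the GO and KEGG captions above are conventionally obtained from a hypergeometric test against an annotated background. The sketch below shows that calculation for a single pathway; all counts are invented for illustration and are not taken from the article.

```python
from scipy.stats import hypergeom

# Invented counts for one pathway (not taken from the article):
M = 20000  # annotated background genes
n = 150    # background genes assigned to the pathway of interest
N = 1815   # genes in the tested list (e.g. dynamic genes in granulosa cells)
k = 35     # tested genes that fall in the pathway

# Enrichment factor: proportion in the gene list relative to the background proportion.
enrichment_factor = (k / N) / (n / M)

# One-sided hypergeometric p-value, P(X >= k).
p_value = hypergeom.sf(k - 1, M, n, N)

print(f"enrichment factor = {enrichment_factor:.2f}, p = {p_value:.2e}")
```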
Full article ">
22 pages, 5474 KiB  
Article
Comparative Transcriptome Analysis of Sexual Differentiation in Male and Female Gonads of Nao-Zhou Stock Large Yellow Croaker (Larimichthys crocea)
by Haojie Wang, Zirui Wen, Eric Amenyogbe, Jinghui Jin, Yi Lu, Zhongliang Wang and Jiansheng Huang
Animals 2024, 14(22), 3261; https://doi.org/10.3390/ani14223261 - 13 Nov 2024
Viewed by 271
Abstract
The Nao-zhou stock large yellow croaker (Larimichthys crocea) is an economically important seawater fish species unique to China and exhibits significant dimorphism between male and female phenotypes. Cultivating all-female seedlings can significantly improve breeding efficiency. To accelerate the cultivation of all-female seedlings of this species, a deep understanding of the regulatory mechanisms of sexual differentiation and gonadal development is necessary. This study used Illumina high-throughput sequencing to sequence the transcriptome of the testes and ovaries of Nao-zhou stock large yellow croaker to identify genes and molecular functions related to sex determination. A total of 10,536 differentially expressed genes were identified between males and females, including 5682 upregulated and 4854 downregulated genes. Functional annotation identified 70 important sex-related candidate genes, including 34 genes highly expressed in the testis (including dmrt1, foxm1, and amh) and 36 genes highly expressed in the ovary (including gdf9, hsd3b1, and sox19b). Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis found that differentially expressed genes were significantly enriched in nine signaling pathways related to sex determination and gonadal development, including steroid hormone biosynthesis, the MAPK signaling pathway, and the TGF-beta signaling pathway. By screening sex-related differentially expressed genes and mapping protein–protein interaction networks, hub genes such as dmrt1, amh, and cyp19a1a were found to be highly connected. The expression levels of 15 sex-related genes, including amh, dmrt1, dmrt2a, foxl1, and zp3b, were determined by qRT–PCR and RNA sequencing. This study screened differentially expressed genes related to sex determination and differentiation in Nao-zhou stock large yellow croaker and revealed the signaling pathways involved in gonad development of male and female individuals. The results provide important data for future research on sex determination and differentiation mechanisms, thereby offering a scientific basis for the cultivation of all-female seedlings. Full article
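The split into 5682 upregulated and 4854 downregulated genes reported above comes from thresholding per-gene statistics; the exact cutoffs are not given in this listing. Below is a minimal sketch of such a filter, with the file name, column names, and cutoffs (|log2FC| > 1, adjusted p < 0.05) assumed purely for illustration.

```python
import pandas as pd

# Assumed per-gene statistics table with columns: gene, log2FC, padj.
deg = pd.read_csv("testis_vs_ovary_DEG.csv")

# Conventional cutoffs; the article's actual thresholds may differ.
significant = deg[(deg["padj"] < 0.05) & (deg["log2FC"].abs() > 1)]
up = significant[significant["log2FC"] > 0]    # higher in one sex (direction depends on the contrast)
down = significant[significant["log2FC"] < 0]  # higher in the other

print(f"upregulated: {len(up)}, downregulated: {len(down)}")
```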
(This article belongs to the Section Animal Physiology)
Show Figures

Figure 1

Figure 1
<p>Histological characteristics of testes (<b>a</b>) and ovaries (<b>b</b>) of Nao-zhou stock large yellow croaker. Note: Sp: sperm; Spe: sperm cell; Sl: sperm lobule; Yg: yolk granule; N: nucleus; Yv: yolk vesicle; Nu: nucleolus.</p>
Full article ">Figure 2
<p>Volcano plot of differentially expressed genes in Nao-zhou stock large yellow croaker. Note: The horizontal axis shows the log<sub>2</sub> fold change, the vertical axis shows the −log<sub>10</sub> <span class="html-italic">p</span> value, green dots represent upregulated genes, red dots represent downregulated genes, and blue dots represent genes with no significant difference. The dotted lines represent the log<sub>2</sub>(FC) thresholds.</p>
Full article ">Figure 3
<p>Violin plot and clustering heat map of the 6 samples. Note: (<b>a</b>) shows the correlation of samples between and within groups. (<b>b</b>) shows the clustering results of the DEGs. Color indicates the log-scaled expression level or fold change: redder indicates higher expression or a larger fold change, and bluer indicates the opposite.</p>
Full article ">Figure 4
<p>Top 30 GO enrichment pathways of differentially expressed genes in the gonads of Nao-zhou stock large yellow croaker. Note: The horizontal axis shows the gene name, and the vertical axis shows the gene ratio.</p>
Full article ">Figure 5
<p>Top 30 KEGG enrichment pathways of differentially expressed genes in the gonads of Nao-zhou stock large yellow croaker. Note: The horizontal axis shows the gene name, and the vertical axis shows the gene ratio.</p>
Full article ">Figure 6
<p>GO (<b>a</b>) and KEGG (<b>b</b>) enriched pathways of the top 20 differentially expressed genes associated with sex in Nao-zhou stock large yellow croaker.</p>
Full article ">Figure 7
<p>Protein–protein interaction (PPI) network diagram of DEGs in female and male Nao-zhou stock large yellow croaker. Note: Different background colors represent the network degree values of the proteins. The inner circle of the PPI network shows hub genes, while the outer two circles show non-hub genes. The number of gene nodes is indicated by color depth.</p>
Full article ">Figure 8
<p>Relative expression levels of 15 genes in the testis and ovary of Nao-zhou stock large yellow croaker. Note: Data are presented as mean ± S.E.M. (n = 3). The asterisks indicate that the differences between the mean values are statistically significant between gonads. *: 0.01 &lt; <span class="html-italic">p</span> &lt; 0.05; **: 0.001 &lt; <span class="html-italic">p</span> &lt; 0.01; ***: <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">Figure 9
<p>qRT-PCR verification of sex-related differentially expressed genes. Note: The horizontal axis shows the gene name, and the vertical axis shows the relative expression level.</p>
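Relative expression values of the kind plotted in Figures 8 and 9 are commonly derived from qRT-PCR Ct values with the 2^(-ΔΔCt) method. Whether the authors used exactly this normalization, and which reference gene and calibrator sample they chose, is not stated in this listing, so the sketch below is purely illustrative.

```python
def relative_expression(ct_target, ct_reference, ct_target_calib, ct_reference_calib):
    """2^(-ddCt): dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(calibrator)."""
    delta_ct_sample = ct_target - ct_reference
    delta_ct_calibrator = ct_target_calib - ct_reference_calib
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)

# Invented Ct values for one gene in the ovary, calibrated against a testis sample.
print(relative_expression(ct_target=22.1, ct_reference=18.3,
                          ct_target_calib=25.4, ct_reference_calib=18.5))
```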
Full article ">Figure 10
<p>Chord diagram of the functional classification of twelve candidate genes. Note: The left half represents candidate genes and expression levels, and the right half represents GO enriched pathways related to reproduction.</p>
Full article ">
18 pages, 4920 KiB  
Article
Dual-Attention Multiple Instance Learning Framework for Pathology Whole-Slide Image Classification
by Dehua Liu, Chengming Li, Xiping Hu and Bin Hu
Electronics 2024, 13(22), 4445; https://doi.org/10.3390/electronics13224445 - 13 Nov 2024
Viewed by 291
Abstract
Conventional methods for tumor diagnosis suffer from two inherent limitations: they are time-consuming and subjective. Computer-aided diagnosis (CAD) is an important approach for addressing these limitations. Pathology whole-slide images (WSIs) are high-resolution tissue images that have made significant contributions to cancer diagnosis and prognosis assessment. Due to the complexity of WSIs and the availability of only slide-level labels, multiple instance learning (MIL) has become the primary framework for WSI classification. However, most MIL methods fail to capture the interdependence among image patches within a WSI, which is crucial for accurate classification prediction. Moreover, due to the weak supervision of slide-level labels, overfitting may occur during the training process. To address these issues, this paper proposes a dual-attention-based multiple instance learning framework (DAMIL). DAMIL leverages the spatial relationships and channel information between WSI patches for classification prediction, without detailed pixel-level tumor annotations. The output of the model preserves the semantic variations in the latent space, enhances semantic disturbance invariance, and provides reliable class identification for the final slide-level representation. We validate the effectiveness of DAMIL on the most commonly used public dataset, Camelyon16. The results demonstrate that DAMIL outperforms the state-of-the-art methods in terms of classification accuracy (ACC), area under the curve (AUC), and F1-Score. Our model also allows for the examination of its interpretability by visualizing the dual-attention weights. To the best of our knowledge, this is the first attempt to use a dual-attention mechanism, considering both spatial and channel information, for whole-slide image classification. Full article
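The dual-attention design described above, and detailed later in the Figure 4 caption (max- and average-pooling over the instance dimension for channel attention, and over the channel dimension for spatial attention), follows the general pattern of CBAM-style attention applied to a bag of patch features. The sketch below is a generic adaptation of that pattern, not the authors' DAMIL implementation; the feature dimension, reduction ratio, and final pooling are illustrative choices.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Pools over the instance dimension, then re-weights feature channels."""
    def __init__(self, dim, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim // reduction), nn.ReLU(),
                                 nn.Linear(dim // reduction, dim))

    def forward(self, x):                        # x: (n_instances, dim)
        avg = self.mlp(x.mean(dim=0))            # average-pool over instances
        mx = self.mlp(x.max(dim=0).values)       # max-pool over instances
        return x * torch.sigmoid(avg + mx)       # per-channel weights

class SpatialAttention(nn.Module):
    """Pools over the channel dimension, then re-weights individual instances."""
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(2, 1)

    def forward(self, x):                        # x: (n_instances, dim)
        avg = x.mean(dim=1, keepdim=True)        # average-pool over channels
        mx = x.max(dim=1, keepdim=True).values   # max-pool over channels
        w = torch.sigmoid(self.score(torch.cat([avg, mx], dim=1)))  # (n_instances, 1)
        return x * w                             # per-instance weights

# Toy bag of 500 patch embeddings (e.g. 512-dimensional ResNet18 features).
bag = torch.randn(500, 512)
bag = SpatialAttention()(ChannelAttention(512)(bag))
logits = nn.Linear(512, 2)(bag.mean(dim=0))      # pool the bag, then classify the slide
```

Applying channel attention before spatial attention mirrors the module order shown in the Figure 1 overview of DAMIL.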
Show Figures

Figure 1

Figure 1
<p>Overview of the proposed DAMIL. The WSI is first cropped into a number of patches, and feature extraction is then performed with the pre-trained ResNet18. The generated feature vector matrix is passed sequentially through the encoder, channel attention module, spatial attention module, decoder, pooling layer, and fully connected layer to generate the final prediction.</p>
Full article ">Figure 2
<p>Illustration of the difference between the attention-based conventional MIL model and the proposed dual-attention MIL model.</p>
Full article ">Figure 3
<p>Graphical representation of the transformation from a 2D feature map to a 1D feature map.</p>
Full article ">Figure 4
<p>Illustration of each attention submodule. As depicted in the diagram, (<b>a</b>) illustrates the channel attention module, while (<b>b</b>) illustrates the spatial attention module. Both attention modules utilize max-pooling and average pooling for their outputs. Channel attention compresses the dimension of instance quantity for pooling operations, while spatial attention compresses the channel dimension for pooling operations.</p>
Full article ">Figure 5
<p>Visualization of the clustering of bag-level representations generated by the model using t-SNE. From left to right, the clustering results for ABMIL [<a href="#B29-electronics-13-04445" class="html-bibr">29</a>], DSMIL [<a href="#B18-electronics-13-04445" class="html-bibr">18</a>], and DAMIL.</p>
Full article ">Figure 6
<p>Interpretable heatmap of a WSI. The initial column displays pixel-level annotations of lymph node metastasis in a WSI, while the subsequent columns showcase the interpretable heatmaps corresponding to the red-boxed regions of the WSI acquired via ABMIL [<a href="#B29-electronics-13-04445" class="html-bibr">29</a>], DSMIL [<a href="#B18-electronics-13-04445" class="html-bibr">18</a>], and DAMIL, respectively.</p>
Full article ">
9 pages, 14661 KiB  
Communication
Identification of Goat Supernumerary Teat Phenotype Using Wide-Genomic Copy Number Variants
by Lu Xu, Weiyi Zhang, Haoyuan Zhang, Xiuqin Yang, Simone Ceccobelli, Yongju Zhao and Guangxin E
Animals 2024, 14(22), 3252; https://doi.org/10.3390/ani14223252 - 13 Nov 2024
Viewed by 220
Abstract
Supernumerary teats (SNTs) or nipples often emerge around the mammary line. This study performed a genome-wide selective sweep analysis (GWS) at the copy number variant (CNV) level using two selection-signal statistics (VST and FST) to identify candidate genes associated with SNTs in goats. A total of 12,310 CNVs were identified from 37 animals, and 123 CNVs with the top 1% VST values were annotated to 84 candidate genes (CDGs). Of these CDGs, minichromosome maintenance complex component 3, ectodysplasin A receptor associated via death domain, and cullin 5 demonstrated functions closely related to mammary gland development. In addition, 123 CNVs with the top 1% FST values were annotated to 97 CDGs. 5-Hydroxytryptamine receptor 2A, CCAAT/enhancer-binding protein alpha, and the polymeric immunoglobulin receptor affect colostrum secretion through multiple signaling pathways. Two genes, namely RNA-binding motif protein 46 and β-1,3-galactosyltransferase 5, showed a close relation to mammary gland development. Six CNVs were identified and annotated to five genes by intersecting the top 1% of candidate CNVs from both parameters. These genes include LOC102185621, LOC102190481, and UDP-glucose pyrophosphorylase 2, which potentially affect the occurrence of breast cancer (BC) through multiple biological processes, such as cell detoxification, glycogen synthesis, and phospholipid metabolism. In conclusion, we discovered numerous genes related to mammary development and BC through the GWS, which sheds light on the mechanism underlying SNTs in goats and suggests an association between mammary cancer and SNTs. Full article
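The VST statistic used above is conventionally defined, for one CNV region and two groups, as (VT - VS)/VT, where VT is the variance in copy number across all samples pooled and VS is the average of the within-group variances weighted by group size; values near 1 indicate strong between-group differentiation. The sketch below implements that textbook definition; the copy-number vectors are invented for illustration.

```python
import numpy as np

def vst(cn_group1, cn_group2):
    """V_ST = (V_T - V_S) / V_T for one CNV region and two groups of samples."""
    g1 = np.asarray(cn_group1, dtype=float)
    g2 = np.asarray(cn_group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    v_total = np.var(np.concatenate([g1, g2]))                  # variance across all samples
    v_within = (np.var(g1) * n1 + np.var(g2) * n2) / (n1 + n2)  # size-weighted within-group variance
    return (v_total - v_within) / v_total if v_total > 0 else 0.0

# Invented copy numbers for one region: goats with supernumerary teats vs. normal goats.
print(vst([2, 3, 3, 4, 3, 3], [2, 2, 2, 2, 2, 3]))
```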
(This article belongs to the Section Animal Genetics and Genomics)
Show Figures

Figure 1

Figure 1
<p>(<b>A</b>) A Manhattan map of the wide-genomic sweep analysis of the goat supernumerary teat phenotype using <span class="html-italic">V</span><sub>ST</sub>. (<b>B</b>) The top 20 KEGG pathways enriched by candidate genes from CNVs with the top 1% <span class="html-italic">V</span><sub>ST</sub> values. (<b>C</b>) A Manhattan map of the wide-genomic sweep analysis of the goat supernumerary teat phenotype using <span class="html-italic">F</span><sub>ST</sub>. (<b>D</b>) The 14 KEGG pathways significantly enriched by the candidate genes from CNVs with the top 1% <span class="html-italic">F</span><sub>ST</sub> values.</p>
Full article ">Figure 2
<p>(<b>A</b>) Intersection of top 1% CNVs between <span class="html-italic">V</span><sub>ST</sub> and <span class="html-italic">F</span><sub>ST</sub>. (<b>B</b>) Intersection map of <span class="html-italic">V</span><sub>ST</sub> and <span class="html-italic">F</span><sub>ST</sub> in terms of top 1% CNV annotated genes.</p>
Full article ">