
Search Results (1,058)

Search Parameters:
Keywords = label transfer

29 pages, 3780 KiB  
Review
Artificial Intelligence-Assisted Stimulated Raman Histology: New Frontiers in Vibrational Tissue Imaging
by Manu Krishnan Krishnan Nambudiri, V. G. Sujadevi, Prabaharan Poornachandran, C. Murali Krishna, Takahiro Kanno and Hemanth Noothalapati
Cancers 2024, 16(23), 3917; https://doi.org/10.3390/cancers16233917 - 22 Nov 2024
Viewed by 238
Abstract
Frozen section biopsy, introduced in the early 1900s, remains the gold standard methodology for rapid histologic evaluation. Although a valuable tool, it is labor-, time-, and cost-intensive. Other challenges include visual and diagnostic variability, which may complicate interpretation and potentially compromise the quality of clinical decisions. Raman spectroscopy, with its high specificity and non-invasive nature, can be an effective tool for dependable and quick histopathology. The most promising modality in this context is stimulated Raman histology (SRH), a label-free, non-linear optical process which generates conventional H&E-like images in short time frames. SRH overcomes the limitations of conventional Raman scattering by leveraging stimulated Raman scattering (SRS), wherein energy is transferred from a high-power pump beam to a probe beam, resulting in high-energy, high-intensity scattering. SRH's high resolution and lack of preprocessing requirements make it particularly suitable for intrasurgical histology. Combining SRH with artificial intelligence (AI) can lead to greater precision and less reliance on manual interpretation, potentially easing the burden on the overstretched global histopathology workforce. We review recent applications and advances in SRH and how it is tapping into AI to evolve into a revolutionary tool for rapid histologic analysis.
(This article belongs to the Special Issue Advanced Research in Oncology in 2024)
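As background for the SRS mechanism the abstract describes (standard Raman physics, not a formula from the review itself): the pump-probe energy transfer is resonant only when the beat frequency between the two beams matches a molecular vibration, which is what gives SRH its chemical contrast.

```latex
% Standard SRS resonance condition (general physics, added here for context):
% the pump-Stokes detuning must equal the vibrational frequency of the probed bond.
\omega_{\text{pump}} - \omega_{\text{Stokes}} = \Omega_{\text{vib}}
```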
18 pages, 6146 KiB  
Article
A Near-Infrared Imaging System for Robotic Venous Blood Collection
by Zhikang Yang, Mao Shi, Yassine Gharbi, Qian Qi, Huan Shen, Gaojian Tao, Wu Xu, Wenqi Lyu and Aihong Ji
Sensors 2024, 24(22), 7413; https://doi.org/10.3390/s24227413 - 20 Nov 2024
Viewed by 242
Abstract
Venous blood collection is a widely used medical diagnostic technique, and with rapid advancements in robotics, robotic venous blood collection has the potential to replace traditional manual methods. The success of this robotic approach depends heavily on the quality of vein imaging. In this paper, we develop a vein imaging device based on simulation analysis of vein imaging parameters and propose a U-Net+ResNet18 neural network for vein image segmentation. The U-Net+ResNet18 network integrates the residual blocks of ResNet18 into the U-Net encoder to form a new network. ResNet18 is pre-trained using the Bootstrap Your Own Latent (BYOL) framework, and its encoder parameters are transferred to the U-Net+ResNet18 network, enhancing segmentation performance when labelled data are limited. Furthermore, we optimize the AD-Census stereo matching algorithm by developing a variable-weight version, which improves its adaptability to image variations across different regions. Results show that, compared to U-Net, the BYOL+U-Net+ResNet18 method achieves an 8.31% reduction in Binary Cross-Entropy (BCE), a 5.50% reduction in Hausdorff Distance (HD), a 15.95% increase in Intersection over Union (IoU), and a 9.20% increase in the Dice coefficient, indicating improved segmentation quality. The average error of the optimized AD-Census stereo matching algorithm is reduced by 25.69%, a marked improvement in stereo matching performance. Future research will explore the application of the vein imaging system in robotic venous blood collection to enable real-time puncture guidance.
(This article belongs to the Section Sensors and Robotics)
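The "U-Net+ResNet18" idea in the abstract — a U-Net whose encoder is replaced by ResNet18 residual stages so that BYOL-pretrained weights can be transferred in — can be sketched roughly as below. This is an illustrative reconstruction, not the authors' code; the decoder shape, channel widths, and output handling are assumptions.

```python
# Hedged sketch: U-Net-style segmentation with a ResNet18 encoder (PyTorch).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class UNetResNet18(nn.Module):
    def __init__(self, n_classes: int = 1):
        super().__init__()
        base = resnet18(weights=None)  # BYOL-pretrained weights would be loaded here
        self.stem = nn.Sequential(base.conv1, base.bn1, base.relu)  # 1/2, 64 ch
        self.pool = base.maxpool                                    # 1/4
        self.enc1 = base.layer1   # 1/4,  64 ch (residual blocks)
        self.enc2 = base.layer2   # 1/8, 128 ch
        self.enc3 = base.layer3   # 1/16, 256 ch
        self.enc4 = base.layer4   # 1/32, 512 ch
        # Decoder: upsample and fuse with the matching encoder feature (U-Net skips).
        self.up3 = self._up(512, 256)
        self.up2 = self._up(256 + 256, 128)
        self.up1 = self._up(128 + 128, 64)
        self.head = nn.Conv2d(64 + 64, n_classes, kernel_size=1)

    @staticmethod
    def _up(in_ch, out_ch):
        return nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        s = self.stem(x)                  # 1/2 resolution
        e1 = self.enc1(self.pool(s))      # 1/4
        e2 = self.enc2(e1)                # 1/8
        e3 = self.enc3(e2)                # 1/16
        e4 = self.enc4(e3)                # 1/32
        d3 = self.up3(e4)                              # back to 1/16
        d2 = self.up2(torch.cat([d3, e3], dim=1))      # 1/8
        d1 = self.up1(torch.cat([d2, e2], dim=1))      # 1/4
        out = self.head(torch.cat([d1, e1], dim=1))    # 1/4-resolution logits
        return nn.functional.interpolate(out, scale_factor=4,
                                         mode="bilinear", align_corners=False)

mask_logits = UNetResNet18()(torch.randn(1, 3, 256, 256))
print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])
```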
Figure 1. Schematic diagram of arm vein imaging.
Figure 2. Simulated NIR propagation through arm tissue: (a) radial two-dimensional cross-section of the local arm model (skin, subcutaneous tissue, and muscle layers from top to bottom, with the circle marking the vein cross-section); (b) ratio of photon densities at x = 2.00 mm; (c) ratio of photon densities at y = 3.80 mm; (d) simulated photon density variation at an incident wavelength of 850 nm; (e) rectangular light source and light-receiving plane model; (f) circular light source and light-receiving plane model; (g) ratio of illuminance to mean illuminance on the x-axis.
Figure 3. Vein imaging device.
Figure 4. Schematic diagram of the vein imaging system for robotic venipuncture.
Figure 5. (a) U-Net+ResNet18 neural network; (b) neural network pre-training and model parameter migration.
Figure 6. Cross-based cost aggregation: (a) cross-based regions and support regions; (b) horizontal aggregation; (c) vertical aggregation.
Figure 7. Vein image random transformation: (a) original NIR vein image; (b,c) the vein image after random transformation.
Figure 8. Variation of the loss function with epoch.
Figure 9. NIR vein image segmentation results: (a) original NIR vein images; (b) segmentation using the Hessian matrix; (c) segmentation using the BYOL+U-Net+ResNet18 method; (d) image binarization; (e) labels corresponding to the original images.
Figure 10. Variation of each neural network model metric with epochs: (a) BCE; (b) IoU; (c) Dice; (d) HD.
Figure 11. Vein centerline extraction: (a) pre-processed NIR greyscale map of veins; (b) centerline extracted by the proposed algorithm; (c) image after connecting and eliminating small connected regions using the contour connection algorithm.
Figure 12. Comparison of stereo matching algorithms: (a) left image; (b) right image; (c) disparity map of the AD-Census algorithm; (d) disparity map of the optimized AD-Census algorithm.
Figure 13. Vein image visualization process: (a) original vein image collected by the camera; (b) vein centerline extraction results; (c) vein image segmentation results; (d) disparity map.
22 pages, 20043 KiB  
Article
Methodology for Object-Level Change Detection in Post-Earthquake Building Damage Assessment Based on Remote Sensing Images: OCD-BDA
by Zhengtao Xie, Zifan Zhou, Xinhao He, Yuguang Fu, Jiancheng Gu and Jiandong Zhang
Remote Sens. 2024, 16(22), 4263; https://doi.org/10.3390/rs16224263 - 15 Nov 2024
Viewed by 372
Abstract
Remote sensing and computer vision technologies are increasingly leveraged for rapid post-disaster building damage assessment, becoming a crucial and practical approach. In this context, the accuracy of AI models in pixel-level change detection depends significantly on the consistency between pre- and post-disaster building images, particularly regarding variations in resolution, viewing angle, and lighting conditions; in object-level feature recognition methods, the limited semantic detail of damaged buildings in images leads to poor detection accuracy. This paper proposes a novel method, OCD-BDA (Object-Level Change Detection for Post-Disaster Building Damage Assessment), as an alternative to pixel-level change detection and object-level feature recognition methods. Inspired by human cognitive processes, the method incorporates three key steps: efficient sample acquisition for object localization, labeling via HGC (Hierarchical and Gaussian Clustering), and model training and prediction for classification. Furthermore, this study establishes a change detection dataset based on Google Earth imagery of regions in Hatay Province before and after the Turkish earthquake. This dataset is characterized by pixel inconsistency and significant differences in photographic angle and lighting between pre- and post-disaster images, making it a valuable test dataset for other studies. In experiments on comparative generalization capability, OCD-BDA achieved an accuracy of 71%, twice that of the second-ranking method. Moreover, OCD-BDA performs well with small sample amounts and short training times: with only 1% of the training samples, it achieves a prediction accuracy exceeding that of traditional transfer learning methods trained with 60% of the samples, and it completes assessment across a large disaster area (450 km²) with 93% accuracy in under 23 min.
(This article belongs to the Section AI Remote Sensing)
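The listing names HGC (Hierarchical and Gaussian Clustering) for the labeling step but gives no details, so the sketch below shows one plausible reading: hierarchical clustering into coarse groups, each refined by a Gaussian mixture to yield pseudo-labels. The feature source, cluster counts, and refinement rule are all assumptions, not the paper's procedure.

```python
# Hedged sketch of an "HGC"-style pseudo-labeling step over patch embeddings.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.mixture import GaussianMixture

def hgc_pseudo_labels(embeddings: np.ndarray, n_coarse: int = 3, n_fine: int = 2):
    """embeddings: (n_patches, d) feature vectors of localized building patches."""
    # Stage 1: hierarchical clustering into coarse damage groups.
    coarse = AgglomerativeClustering(n_clusters=n_coarse).fit_predict(embeddings)
    labels = np.zeros(len(embeddings), dtype=int)
    next_id = 0
    for c in range(n_coarse):
        idx = np.where(coarse == c)[0]
        if len(idx) == 0:
            continue
        # Stage 2: refine each coarse cluster with a Gaussian mixture.
        k = max(1, min(n_fine, len(idx)))  # guard against tiny clusters
        gmm = GaussianMixture(n_components=k, random_state=0).fit(embeddings[idx])
        labels[idx] = gmm.predict(embeddings[idx]) + next_id
        next_id += k
    return labels

# Example: 200 random 64-d embeddings -> pseudo-label ids for classifier training.
rng = np.random.default_rng(0)
print(hgc_pseudo_labels(rng.normal(size=(200, 64)))[:10])
```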
Figure 1. The design concept of the object-level post-disaster building assessment methodology.
Figure 2. Building damage assessment criteria.
Figure 3. The overall architecture of OCD-BDA: YOLOv8 Lite for localization; HGC for labeling; and R50M2Net for classification.
Figure 4. The network structure of YOLOv8 Lite.
Figure 5. Flowchart of HGC.
Figure 6. Comparison between HGC and traditional labeling methods.
Figure 7. The overall architecture of R50M2Net.
Figure 8. The regions contained in the Turkish earthquake dataset.
Figure 9. Detection results of the generalization capability experiments: (a) Samandag; (b) Antakia; (c) Klrlkhan; (d) Iskenderun. Green represents Grade A, yellow Grade B, and red Grade C.
Figure 10. Details of the comparative study.
Figure 11. Accuracy comparison under different training sample ratios: Ex1 represents traditional transfer learning; Ex2 refers to OCD-BDA.
Figure 12. Test results of the two experiments: green represents Grade A, yellow Grade B, and red Grade C.
13 pages, 2439 KiB  
Article
Distribution and Incorporation of Extracellular Vesicles into Chondrocytes and Synoviocytes
by Takashi Ohtsuki, Ikumi Sato, Ren Takashita, Shintaro Kodama, Kentaro Ikemura, Gabriel Opoku, Shogo Watanabe, Takayuki Furumatsu, Hiroshi Yamada, Mitsuru Ando, Kazunari Akiyoshi, Keiichiro Nishida and Satoshi Hirohata
Int. J. Mol. Sci. 2024, 25(22), 11942; https://doi.org/10.3390/ijms252211942 - 6 Nov 2024
Viewed by 564
Abstract
Osteoarthritis (OA) is a chronic disease affecting over 500 million people worldwide. As the population ages and obesity rates rise, the societal burden of OA is increasing. Pro-inflammatory cytokines, particularly interleukin-1β, are implicated in the pathogenesis of OA. Recent studies suggest that crosstalk between cartilage and synovium contributes to OA development, but the mechanisms remain unclear. Extracellular vesicles (EVs) were purified from cell culture-conditioned medium via ultracentrifugation and confirmed using transmission electron microscopy, nanoparticle tracking analysis, and western blotting. We demonstrated that EVs were taken up by human synoviocytes and chondrocytes in vitro, while in vivo experiments revealed that fluorescent-labelled EVs injected into mouse joints were incorporated into chondrocytes and synoviocytes. EV uptake was significantly inhibited by dynamin-mediated endocytosis inhibitors, indicating that endocytosis plays a major role in this process. Additionally, co-culture experiments with HEK-293 cells expressing red fluorescent protein (RFP)-tagged CD9 and the chondrocytic cell line OUMS-27 confirmed the transfer of RFP-positive EVs across a 600-nm but not a 30-nm filter. These findings suggest that EVs from chondrocytes are released into joint fluid and taken up by cells within the cartilage, potentially facilitating communication between cartilage and synovium. The results underscore the importance of EVs in OA pathophysiology.
(This article belongs to the Special Issue Molecular Metabolisms in Cartilage Health and Diseases: 3rd Edition)
Figure 1. Characterization of EVs derived from OUMS-27: (a) size distribution of EVs, ranging from 50 to 500 nm (black line = mean, red area = standard deviation of five measurements of the same samples); (b) western blot analysis of EV surface markers CD9, Hsp70, and TSG101; (c) representative transmission electron microscopy (TEM) images of EVs (scale bar = 100 nm).
Figure 2. Uptake of PKH67-labelled EVs in OUMS-27 cells, observed at 3, 6, 12, and 24 h by fluorescence microscopy (scale bar = 50 μm); green indicates PKH67-labelled EVs, blue indicates nuclei.
Figure 3. Inhibition of EV uptake by dynasore (dynamin GTPase inhibitor) in OUMS-27 cells pretreated with (20 μM and 80 μM) or without dynasore for 24 h before adding PKH67-EVs (scale bar = 100 μm).
Figure 4. Transfer of RFP-tagged CD9-expressing EVs in a co-culture system of RFP-CD9 HEK-293 cells with OUMS-27 cells, chondrocytes, and synoviocytes from patients with OA, separated by filters: RFP-CD9 signals were rarely observed in OUMS-27 cells across a 30-nm filter but were observed across a 600-nm filter, and were also observed in chondrocytes and synoviocytes across a 600-nm filter (scale bar = 20 µm). Red = RFP-CD9 EVs, green = F-actin, blue = nuclei.
Figure 5. Uptake of labelled EVs injected into the mouse joint cavity: 3.0 × 10^8 particles of immunofluorescent-labelled EVs were injected into the left hind knee joint of C57BL/6J mice, and red fluorescent signals were observed 24 h later in (a) chondrocytes and (b) synoviocytes (Kawamoto's film method, 5 μm frozen sections).
22 pages, 5584 KiB  
Article
Enhanced Magnetic Resonance Imaging-Based Brain Tumor Classification with a Hybrid Swin Transformer and ResNet50V2 Model
by Abeer Fayez Al Bataineh, Khalid M. O. Nahar, Hayel Khafajeh, Ghassan Samara, Raed Alazaidah, Ahmad Nasayreh, Ayah Bashkami, Hasan Gharaibeh and Waed Dawaghreh
Appl. Sci. 2024, 14(22), 10154; https://doi.org/10.3390/app142210154 - 6 Nov 2024
Viewed by 564
Abstract
Brain tumors can be serious; consequently, rapid and accurate detection is crucial. Nevertheless, a variety of obstacles, such as poor imaging resolution, doubts over the accuracy of data, a lack of diverse tumor classes and stages, and the possibility of misinterpretation, present challenges to achieving an accurate final diagnosis. Effective brain cancer detection is crucial for patients' safety and health. Deep learning systems can assist radiologists in quickly and accurately detecting diagnoses. This study presents an innovative deep learning approach that utilizes the Swin Transformer. The suggested method integrates the Swin Transformer with the pretrained deep learning model ResNet50V2, termed SwT+ResNet50V2. The objective of this combination is to decrease memory utilization, enhance classification accuracy, and reduce training complexity. The self-attention mechanism of the Swin Transformer identifies distant relationships and captures the overall context. ResNet50V2 improves both accuracy and training speed by extracting adaptive features from the Swin Transformer's dependencies. We evaluate the proposed framework on two publicly accessible brain magnetic resonance imaging (MRI) datasets, comprising two and four distinct classes, respectively. Data augmentation and transfer learning techniques enhance model performance, leading to more dependable and cost-effective training. The suggested model achieves an impressive accuracy of 99.9% on the binary-labeled dataset and 96.8% on the four-labeled dataset, outperforming the VGG16, MobileNetV2, ResNet50V2, EfficientNetV2B3, ConvNeXtTiny, and convolutional neural network (CNN) baselines used for comparison. This demonstrates that the Swin Transformer, when combined with ResNet50V2, is capable of accurately diagnosing brain tumors, giving radiologists the potential to accelerate and improve detection, with improved patient outcomes and reduced risks.
(This article belongs to the Special Issue Advances in Bioinformatics and Biomedical Engineering)
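One way to picture the SwT+ResNet50V2 hybrid is as two backbones whose pooled features are fused before a small classification head. The sketch below is a rough stand-in, not the paper's architecture: timm's Swin-Tiny and torchvision's plain resnet50 (torchvision ships no ResNet50V2) substitute for the actual backbones, and fusion-by-concatenation is an assumption.

```python
# Hedged sketch: fuse pooled Swin Transformer and ResNet features for classification.
import torch
import torch.nn as nn
import timm
from torchvision.models import resnet50

class SwinResNetHybrid(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.swin = timm.create_model("swin_tiny_patch4_window7_224",
                                      pretrained=False, num_classes=0)
        cnn = resnet50(weights=None)
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])  # drop the fc head
        fused_dim = self.swin.num_features + 2048
        self.head = nn.Sequential(nn.Linear(fused_dim, 256), nn.ReLU(),
                                  nn.Dropout(0.3), nn.Linear(256, num_classes))

    def forward(self, x):
        f_swin = self.swin(x)           # (B, swin_dim): global context features
        f_cnn = self.cnn(x).flatten(1)  # (B, 2048): local CNN features
        return self.head(torch.cat([f_swin, f_cnn], dim=1))

logits = SwinResNetHybrid()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```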
Figure 1. Workflow diagram of the proposed brain tumor detection method.
Figure 2. Architecture of the Swin Transformer.
Figure 3. Architecture of ResNet50V2.
Figure 4. Instances of the types of brain tumor in MRI images.
Figure 5. Performance evaluation on the Bra35H dataset.
Figure 6. Training and validation metrics (accuracy and loss) for SwT+ResNet50V2 on the Bra35H dataset.
Figure 7. Comparison of confusion matrices for all models on the Bra35H dataset.
Figure 8. Performance evaluation on the Kaggle dataset.
Figure 9. Training and validation metrics (accuracy and loss) for SwT+ResNet50V2 on the Kaggle dataset.
Figure 10. Comparison of confusion matrices for all models on the Kaggle dataset.
19 pages, 3033 KiB  
Article
A Cross-Attention-Based Class Alignment Network for Cross-Subject EEG Classification in a Heterogeneous Space
by Sufan Ma and Dongxiao Zhang
Sensors 2024, 24(21), 7080; https://doi.org/10.3390/s24217080 - 3 Nov 2024
Viewed by 534
Abstract
Background: Domain adaptation (DA) techniques have emerged as a pivotal strategy for addressing the challenges of cross-subject classification. However, traditional DA methods are inherently limited by the assumption of a homogeneous space, requiring that the source and target domains share identical feature dimensions and label sets, which is often impractical in real-world applications. Effectively addressing EEG classification under heterogeneous spaces has therefore emerged as a crucial research topic. Methods: We present a comprehensive framework that addresses the challenges of heterogeneous spaces by implementing a cross-domain class alignment strategy. We construct a cross-encoder to capture the intricate dependencies between data across domains, and introduce a tailored class discriminator with a corresponding loss function. Optimizing this loss aggregates features of corresponding classes between the source and target domains while dispersing features of non-corresponding classes. Results: Extensive experiments were conducted on two publicly available EEG datasets. Compared to advanced methods that combine label alignment with transfer learning, our method demonstrated superior performance across five heterogeneous space scenarios. Notably, in four heterogeneous label space scenarios, our method outperformed the advanced methods by an average of 7.8%; in complex scenarios involving both heterogeneous label spaces and heterogeneous feature spaces, it outperformed the state-of-the-art methods by an average of 4.1%. Conclusions: This paper presents an efficient model for cross-subject EEG classification under heterogeneous spaces, opening new perspectives and avenues for research in related fields.
(This article belongs to the Section Biomedical Sensors)
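The "cross-encoder" capturing dependencies between source and target data could plausibly be a cross-attention block in which target-domain features query source-domain features; the minimal sketch below shows that pattern. The dimensions, single-layer design, and residual/feed-forward arrangement are assumptions, not details from the paper.

```python
# Hedged sketch: cross-attention between target (queries) and source (keys/values).
import torch
import torch.nn as nn

class CrossEncoder(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 2), nn.GELU(),
                                nn.Linear(dim * 2, dim))

    def forward(self, tgt: torch.Tensor, src: torch.Tensor) -> torch.Tensor:
        # tgt: (B, T, dim) target-domain EEG features; src: (B, S, dim) source features.
        # Queries come from the target, keys/values from the source, so the output
        # mixes source-domain information into the target representation.
        mixed, _ = self.attn(query=tgt, key=src, value=src)
        h = self.norm(tgt + mixed)
        return self.norm(h + self.ff(h))

enc = CrossEncoder()
out = enc(torch.randn(8, 10, 64), torch.randn(8, 20, 64))
print(out.shape)  # torch.Size([8, 10, 64])
```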
Figure 1. Different DA scenarios. A, B, …, H represent different classes; "?" represents unknown classes.
Figure 2. The proposed framework. Blue lines indicate the data flow within the target domain; orange lines depict the data flow originating from the source domain.
Figure 3. Cross-encoder framework. The blue line represents the target-domain data flow, while the orange line signifies the source-domain data flow.
Figure 4. Comparison with existing methods across the first four scenarios. The horizontal axis is the serial number of classification possibilities in each scenario; the vertical axis is the average accuracy of the nine target subjects.
Figure 5. Comparison with existing methods in Scenario 5. The horizontal axis is the serial number of subjects in the target domain; the vertical axis is their accuracy.
Figure 6. Comparison of effects with and without the cross-encoder. The horizontal axis is the number of subjects; the vertical axis is classification accuracy. Scenario 1: 1,2 → 1,4. Scenario 2: 1,2,4 → 1,2,3. Scenario 3: 1,2 → 3,4. Scenario 4: 1,2 → 3,4. Scenario 5: +,− → 3,4.
Figure 7. Original feature distributions (top) and feature distributions extracted by our framework (bottom). Blue: left hand; gray: right hand; green: tongue; red: feet.
23 pages, 3632 KiB  
Article
Towards the Development of an Optical Biosensor for the Detection of Human Blood for Forensic Analysis
by Hayley Costanzo, Maxine den Hartog, James Gooch and Nunzianda Frascione
Sensors 2024, 24(21), 7081; https://doi.org/10.3390/s24217081 - 3 Nov 2024
Viewed by 746
Abstract
Blood is a common biological fluid in forensic investigations, offering significant evidential value. Currently employed presumptive blood tests often lack specificity and are sample-destructive, which can compromise downstream analysis. In this study, the development of an optical biosensor for detecting human red blood cells (RBCs) was explored to address such limitations. Aptamer-based biosensors, termed aptasensors, offer a promising alternative due to their high specificity and affinity for target analytes. Aptamers are short, single-stranded DNA or RNA sequences that form stable three-dimensional structures, allowing them to bind selectively to specific targets. A nanoflare design was employed in this work, consisting of a quenching gold nanoparticle (AuNP), DNA aptamer sequences, and complementary fluorophore-labelled flares operating through a fluorescence resonance energy transfer (FRET) mechanism. In the presence of RBCs, the aptamer-flare complex is disrupted, restoring fluorescence and indicating the presence of blood. Two aptamers, N1 and BB1, with demonstrated binding affinity to RBCs, were selected for inclusion in the nanoflare. This study aimed to optimise three features of the design: aptamer conjugation to AuNPs, aptamer hybridisation to complementary flares, and flare displacement in the presence of RBCs. Fluorescence restoration was achieved with both the N1 and BB1 nanoflares, demonstrating the potential for a functional biosensor within the forensic workflow. Such an aptasensor could replace current tests with a specific, sensitive reagent suitable for real-time detection, improving the standard of forensic blood analysis.
(This article belongs to the Special Issue Nanomaterials for Sensor Applications)
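For context on the FRET quenching the nanoflare relies on (standard photophysics, not a formula from this listing): transfer efficiency falls off with the sixth power of the fluorophore-quencher distance, which is why displacing the flare from the AuNP surface restores fluorescence.

```latex
% Foerster (FRET) transfer efficiency as a function of donor-acceptor distance r;
% R_0 is the pair-specific Foerster radius (not given in this listing).
E = \frac{1}{1 + (r / R_0)^{6}}
```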
Figure 1. Aptamer-based nanoflare for detecting human red blood cells. The nanoflare is initially quenched; upon the addition of red blood cells, aptamers bind to the cells, displacing the reporter flares, whose fluorescent signal can then be measured with a spectrophotometer.
Figure 2. Secondary structure predictions of aptamers (A) N1 and (B) BB1 [22].
Figure 3. Mechanism of the streptavidin bead displacement assay used to test flare sequence displacement from the aptamer sequence.
Figure 4. The amount of flares bound (nmoles) to the complementary aptamer sequence for (A) N1 and (B) BB1 (n = 3 independent measurements, error bars = s.d.).
Figure 5. The percentage of flares bound to the complementary aptamer sequence for (A) N1 and (B) BB1.
Figure 6. The percentage of flares displaced from N1 or BB1 aptamers when incubated with the target RBCs.
Figure 7. Aptamers per AuNP obtained through freeze-directed conjugation with loading ratios of ×0, ×50, ×150, ×300, ×600, and ×1200 of (A) N1 and (B) BB1.
Figure 8. DLS scans showing the particle size distribution of (A) unconjugated 15 nm citrate AuNPs, (B) AuNPs with N1 aptamer conjugated to the surface, and (C) AuNPs with BB1 aptamer conjugated to the surface; the average hydrodynamic radius is given for each scan (d.nm).
Figure 9. DLS scans showing the particle size distribution of (A) unconjugated 15 nm citrate AuNPs, (B) N1 nanoflares, and (C) BB1 nanoflares.
Figure 10. Fluorescent intensity measurements of (1) unconstructed nanoflare components, (2) the constructed nanoflare, and (3) displaced flares after incubation of the nanoflare with RBCs.
28 pages, 27981 KiB  
Article
Acoustic Imaging Learning-Based Approaches for Marine Litter Detection and Classification
by Pedro Alves Guedes, Hugo Miguel Silva, Sen Wang, Alfredo Martins, José Almeida and Eduardo Silva
J. Mar. Sci. Eng. 2024, 12(11), 1984; https://doi.org/10.3390/jmse12111984 - 3 Nov 2024
Viewed by 591
Abstract
This paper introduces an advanced acoustic imaging system leveraging multibeam water-column data at various frequencies to detect and classify marine litter. The study encompasses (i) the acquisition of test tank data for diverse types of marine litter at multiple acoustic frequencies; (ii) the creation of a comprehensive acoustic image dataset with meticulous labelling and formatting; and (iii) the implementation of classification algorithms, namely support vector machine (SVM) and convolutional neural network (CNN), alongside detection algorithms based on transfer learning, including the single-shot multibox detector (SSD) and You Only Look Once (YOLO), specifically YOLOv8. The findings reveal discrimination between different classes of marine litter across the implemented algorithms for both detection and classification. Furthermore, cross-frequency studies were conducted to assess model generalisation, evaluating the performance of models trained on one acoustic frequency when tested with acoustic images based on different frequencies. This approach underscores the potential of multibeam data for detecting and classifying marine litter in the water column, paving the way for novel research methods in real-life environments.
(This article belongs to the Special Issue Applications of Underwater Acoustics in Ocean Engineering)
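The raw-versus-Cartesian acoustic images mentioned in the figures imply a polar-to-Cartesian resampling of the (range × beam) water-column data. A minimal numpy sketch of that conversion follows; the field of view, range, and nearest-neighbour lookup are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: resample a multibeam water-column image (range x beam) onto a
# Cartesian grid, with the sonar at the origin looking "down" the y-axis.
import numpy as np

def polar_to_cartesian(raw: np.ndarray, fov_deg: float = 120.0,
                       max_range_m: float = 10.0, out_px: int = 256) -> np.ndarray:
    """raw: (n_ranges, n_beams) backscatter intensities; returns (out_px, out_px)."""
    n_ranges, n_beams = raw.shape
    xs = np.linspace(-max_range_m, max_range_m, out_px)   # across-track
    ys = np.linspace(0.0, max_range_m, out_px)            # along-range
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)                    # range of each output pixel
    theta = np.degrees(np.arctan2(X, Y))  # beam angle of each output pixel
    # Nearest-neighbour lookup back into the (range, beam) grid.
    ri = np.round(r / max_range_m * (n_ranges - 1)).astype(int)
    bi = np.round((theta + fov_deg / 2) / fov_deg * (n_beams - 1)).astype(int)
    valid = (r <= max_range_m) & (np.abs(theta) <= fov_deg / 2)
    out = np.zeros_like(r)
    out[valid] = raw[ri[valid], bi[valid]]
    return out

img = polar_to_cartesian(np.random.rand(512, 128))
print(img.shape)  # (256, 256)
```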
Figure 1. Marine litter in the water column. Courtesy of Unsplash by Naja Jensen.
Figure 2. Kongsberg M3 Multibeam High-Frequency Echosounder system setup in the test tank: (a) test tank setup; (b) MBES capturing the wooden deck in the water column.
Figure 3. Marine debris used for the test tank dataset: PVC squares (1); PVC traffic cone (2); wooden deck (3); vinyl sheet (4); fish net (5).
Figure 4. High-level architecture for the MBES sensor and acoustic imaging for detection and classification problems.
Figure 5. Raw acoustic images of a PVC square at the same range, with varying FOV due to the different acoustic frequencies: (a) 1200 kHz; (b) 1400 kHz.
Figure 6. Cartesian acoustic image of a PVC square in the water column.
Figure 7. Polar acoustic image of a PVC square in the water column.
Figure 8. Class activation map applied to the CNN with a polar image of a PVC square as input.
Figure 9. SSD model inference on two polar acoustic images with multiple targets, with target detection confidences.
Figure 10. YOLOv8 model inference on polar acoustic images with multiple targets, with target detection confidences.
18 pages, 15722 KiB  
Article
PANDA: A Polarized Attention Network for Enhanced Unsupervised Domain Adaptation in Semantic Segmentation
by Chiao-Wen Kao, Wei-Ling Chang, Chun-Chieh Lee and Kuo-Chin Fan
Electronics 2024, 13(21), 4302; https://doi.org/10.3390/electronics13214302 - 31 Oct 2024
Viewed by 673
Abstract
Unsupervised domain adaptation (UDA) focuses on transferring knowledge from a labeled source domain to an unlabeled target domain, reducing the cost of manual data labeling. The main challenge in UDA is bridging the substantial feature distribution gap between the source and target domains. To address this, we propose Polarized Attention Network Domain Adaptation (PANDA), a novel approach that leverages Polarized Self-Attention (PSA) to capture the intricate relationships between the source and target domains, effectively mitigating domain discrepancies. PANDA integrates both channel and spatial information, allowing it to capture detailed features and overall structures simultaneously. The proposed method significantly outperforms current state-of-the-art UDA techniques for semantic segmentation: it improves mean intersection over union (mIoU) by 0.2% on the GTA→Cityscapes benchmark and by a substantial 1.4% on the SYNTHIA→Cityscapes benchmark, attaining mIoU scores of 76.1% and 68.7%, respectively, which reflect meaningful advances in model accuracy and domain adaptation performance.
(This article belongs to the Special Issue Digital Signal and Image Processing for Multimedia Technology)
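To give a rough feel for a PSA-style block: one branch re-weights channels, a parallel branch re-weights spatial positions, and the two outputs are summed (the "parallel layout" in Figure 3). The sketch below is a heavily simplified gating approximation, closer to squeeze-and-excitation than to the full softmax-normalized PSA, and is not the authors' module.

```python
# Hedged sketch: parallel channel-only and spatial-only attention gates.
import torch
import torch.nn as nn

class SimplePSA(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        # Channel branch: squeeze space, then re-weight channels.
        self.ch_gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(ch, ch // 2, 1), nn.ReLU(),
                                     nn.Conv2d(ch // 2, ch, 1), nn.Sigmoid())
        # Spatial branch: squeeze channels, then re-weight positions.
        self.sp_gate = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        # Parallel layout: channel-attended + spatially-attended features.
        return x * self.ch_gate(x) + x * self.sp_gate(x)

y = SimplePSA(64)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```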
Figure 1. Overview of the proposed PANDA architecture.
Figure 2. The structure of PSA: channel-only self-attention (CSA) in the left half, spatial-only self-attention (SSA) in the right half; LN is layer normalization.
Figure 3. The PSA block under different connection schemes: (a) parallel layout; (b) sequential layout.
Figure 4. Qualitative comparison of PANDA with previous methods on GTA→Cityscapes.
Figure 5. Qualitative comparison of PANDA with previous methods on SYNTHIA→Cityscapes.
Figure 6. Failure cases of segmentation results on GTA→Cityscapes (rows 1 and 2) and SYNTHIA→Cityscapes (rows 3 and 4).
19 pages, 7193 KiB  
Article
Intelligent Fault Diagnosis of Planetary Gearbox Across Conditions Based on Subdomain Distribution Adversarial Adaptation
by Songjun Han, Zhipeng Feng, Ying Zhang, Minggang Du and Yang Yang
Sensors 2024, 24(21), 7017; https://doi.org/10.3390/s24217017 - 31 Oct 2024
Viewed by 429
Abstract
Sensory data are the basis for intelligent health state awareness of planetary gearboxes, which are critical components of electromechanical systems. Despite the advantages of intelligent diagnostic techniques for detecting intricate fault patterns and improving diagnostic speed, challenges persist, including the limited availability of fault data, the lack of labeling information, and discrepancies in features across different signals. To address this issue, a subdomain distribution adversarial adaptation diagnosis method (SDAA) is proposed for fault diagnosis of planetary gearboxes across different conditions. First, nonstationary vibration signals are converted into a two-dimensional time-frequency representation to extract intrinsic information and avoid frequency overlapping. Second, an adversarial training mechanism is designed to evaluate subclass feature distribution differences between the source and target domains, and a conditional distribution adaptation is employed to account for correlations among data from different subclasses. Finally, the proposed method is validated through experiments on planetary gearboxes; the results demonstrate that SDAA can effectively diagnose faults across conditions, with an accuracy of 96.7% for gear faults and 95.2% for planet bearing faults, outperforming other methods in both accuracy and model robustness. This confirms that the approach can refine domain-invariant information for transfer learning with less information loss by working at the subclass level of fault data rather than the overall class level.
(This article belongs to the Special Issue Emerging Sensing Technologies for Machine Health State Awareness)
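The adversarial part of SDAA presumably builds on the standard gradient-reversal mechanism, in which a domain discriminator is trained through a layer that flips gradients flowing back to the feature extractor. The sketch below shows that generic mechanism only; SDAA's per-subclass (conditional) weighting, losses, and schedules are not given in the listing and are noted as a comment.

```python
# Hedged sketch: gradient reversal layer (GRL) + domain discriminator, the standard
# building block of adversarial domain adaptation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Flip the gradient so the feature extractor learns domain-invariant features.
        return -ctx.lam * grad, None

class DomainDiscriminator(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feats, lam: float = 1.0):
        return self.net(GradReverse.apply(feats, lam))

# In a subdomain variant, features would first be weighted by predicted class
# probabilities so each fault subclass is aligned separately (not shown here).
disc = DomainDiscriminator()
domain_logits = disc(torch.randn(16, 256), lam=0.5)
print(domain_logits.shape)  # torch.Size([16, 1])
```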
Figure 1. Diagram of data distribution adaptation among different fault classes: (a) averaged distribution of faults; (b) data distribution of different faults.
Figure 2. Overfitting during deep network training.
Figure 3. Schematic structure of a residual block.
Figure 4. Domain adversarial training process.
Figure 5. Domain confusion training process.
Figure 6. Transfer diagnostic framework of SDAA.
Figure 7. Test rig of a one-stage planetary gearbox: (a) experimental test rig; (b) diagram of the gearbox structure.
Figure 8. Damaged parts in planetary gearboxes: (a) planet gear fault; (b) sun gear fault; (c) ring gear fault; (d) inner race fault; (e) outer race fault; (f) rolling element fault.
Figure 9. Sample division diagram for the vibration signal.
Figure 10. Motor speed curves under two time-varying modes: (a) linear; (b) sinusoidal.
Figure 11. Comparison of diagnostic results for different methods: (a) method performance; (b) accuracy variation.
Figure 12. Convergence of training on the Vin-V3 task: (a) training loss; (b) test loss; (c) accuracy.
Figure 13. Fault diagnosis performance on the Vin-V3 task: (a) ResNet18; (b) DAN; (c) DDAN; (d) DAAN; (e) SDAA.
Figure 14. Comparison of diagnostic results for different methods.
Figure 15. Convergence of adversarial adaptation methods on the Bin-B1 task: (a) test loss; (b) accuracy.
Figure 16. Feature visualization of different methods on task B3-B1: (a) ResNet18; (b) DAN; (c) DDAN; (d) DAAN; (e) SDAA.
16 pages, 8285 KiB  
Technical Note
A Feature-Driven Inception Dilated Network for Infrared Image Super-Resolution Reconstruction
by Jiaxin Huang, Huicong Wang, Yuhan Li and Shijian Liu
Remote Sens. 2024, 16(21), 4033; https://doi.org/10.3390/rs16214033 - 30 Oct 2024
Viewed by 437
Abstract
Image super-resolution (SR) algorithms based on deep learning yield good visual performance on visible images. Due to the blurred edges and low contrast of infrared (IR) images, methods transferred directly from visible to IR images perform poorly and ignore the demands of downstream detection tasks. Therefore, an Inception Dilated Super-Resolution (IDSR) network with multiple branches is proposed. A dilated convolutional branch captures high-frequency information to reconstruct edge details, while a non-local operation branch captures long-range dependencies between any two positions to maintain the global structure. Furthermore, deformable convolution is utilized to fuse features extracted from different branches, enabling adaptation to targets of various shapes. To enhance the detection performance on low-resolution (LR) images, we crop the images into patches based on target labels before feeding them to the network. This allows the network to focus on reconstructing the target areas only, reducing the interference of background areas. Additionally, a feature-driven module is cascaded at the end of the IDSR network to guide high-resolution (HR) image reconstruction with feature priors from a detection backbone. The method has been tested on the FLIR Thermal Dataset and the M3FD Dataset and compared with five mainstream SR algorithms. The results demonstrate that our method effectively maintains image texture details and, more importantly, achieves 80.55% mAP on the FLIR dataset and 74.7% mAP on the M3FD dataset, outperforming the other methods in detection accuracy.
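The "dilated convolutional branch" capturing high-frequency detail suggests an inception-style block with parallel dilation rates. The sketch below illustrates that general pattern; the branch count, rates, and residual fusion are assumptions rather than the paper's actual IDM.

```python
# Hedged sketch: parallel dilated-convolution branches fused by a 1x1 conv, with a
# residual connection. Different dilation rates give different receptive fields.
import torch
import torch.nn as nn

class InceptionDilatedBlock(nn.Module):
    def __init__(self, ch: int, rates=(1, 2, 4)):
        super().__init__()
        # padding=r with dilation=r keeps the spatial size for a 3x3 kernel.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, ch, 3, padding=r, dilation=r), nn.ReLU())
            for r in rates
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, kernel_size=1)

    def forward(self, x):
        return x + self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

y = InceptionDilatedBlock(32)(torch.randn(1, 32, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```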
Figure 1. The overall structure of the proposed method, consisting of three parts: a data preprocessing step that crops the images into patches, an SR reconstruction network that generates SR images, and a feature-driven module that improves detection accuracy.
Figure 2. The architecture of the proposed IDSR for image super-resolution.
Figure 3. The details of the Inception Dilated Mixer (IDM).
Figure 4. Frequency magnitudes from 8 output channels of the high-frequency and low-frequency extractors.
Figure 5. Super-resolution reconstruction results for LR images from the FLIR dataset (200k iterations); each two rows show one scene, from top to bottom FLIR-08989 and FLIR-08951.
Figure 6. Analysis of loss weight (λ, μ) selection in our method.
Figure 7. Super-resolution reconstruction results for LR images from the FLIR dataset by feature-driven IDSR (our method, 300k iterations); scenes FLIR-08989 and FLIR-08951.
Figure 8. Object detection (YOLOv7) results for SR images from the FLIR dataset by feature-driven IDSR (our method, 300k iterations); scenes FLIR-09401 and FLIR-09572.
15 pages, 1886 KiB  
Article
Predicting the Pathway Involvement of All Pathway and Associated Compound Entries Defined in the Kyoto Encyclopedia of Genes and Genomes
by Erik D. Huckvale and Hunter N. B. Moseley
Metabolites 2024, 14(11), 582; https://doi.org/10.3390/metabo14110582 - 27 Oct 2024
Viewed by 543
Abstract
Background/Objectives: Predicting the biochemical pathway involvement of a compound could facilitate the interpretation of biological and biomedical research. Prior prediction approaches have largely focused on metabolism, training machine learning models to predict solely based on metabolic pathways. However, there are many other types of pathways in cells and organisms that are of interest to biologists. Methods: While several publications have made use of the metabolites and metabolic pathways available in the Kyoto Encyclopedia of Genes and Genomes (KEGG), we downloaded all compound entries with pathway annotations available in KEGG. From these data, we constructed a dataset in which each entry contained features representing a compound combined with features representing a pathway, followed by a binary label indicating whether the given compound is associated with the given pathway. We trained multi-layer perceptron binary classifiers on variations of this dataset. Results: The models trained on 6485 KEGG compounds and 502 pathways scored an overall mean Matthews correlation coefficient (MCC) of 0.847, a median MCC of 0.848, and a standard deviation of 0.0098. Conclusions: This performance on all 502 KEGG pathways represents a roughly 6% improvement over models trained on only the 184 KEGG metabolic pathways, which had a mean MCC of 0.800 and a standard deviation of 0.021. These results demonstrate the capability to effectively predict biochemical pathways in general, in addition to those specifically related to metabolism. Moreover, the improvement demonstrates additional transfer learning from the inclusion of non-metabolic pathways.
(This article belongs to the Special Issue Machine Learning Applications in Metabolomics Analysis)
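The train/evaluate pattern the abstract describes — a multi-layer perceptron binary classifier over concatenated compound+pathway feature vectors, scored by Matthews correlation coefficient — looks roughly like the sketch below. The feature matrix here is random stand-in data; only the fit/score pattern mirrors the described setup.

```python
# Hedged sketch: MLP binary classifier scored with MCC (scikit-learn).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 128))             # stand-in for compound||pathway features
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # stand-in "compound in pathway" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("MCC:", round(matthews_corrcoef(y_te, clf.predict(X_te)), 3))
```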
Figure 1. Distribution of MCCs across CV iterations for each dataset.
Figure 2. L1 pathway MCC and size, as well as the number of pathway features with a positive value.
Figure 3. Distribution of pathway and compound size in the full KEGG dataset: (a) size distribution of all pathways; (b) distribution of pathways of size less than 1000; (c) size distribution of compounds. Size here is the number of non-hydrogen atoms in a compound or pathway (summed across the compounds associated with the pathway).
Figure 4. Distribution of the MCCs of individual pathways and compounds in the full KEGG dataset: (a) pathway MCCs; (b) compound MCCs.
Figure 5. Relation of pathway and compound size to individual MCC in the full KEGG dataset: (a) pathway size to pathway MCC; (b) the same with log-scale x-axis; (c) compound size to compound MCC; (d) the same with log-scale x-axis.
24 pages, 6053 KiB  
Article
Gestational Diabetes-like Fuels Impair Mitochondrial Function and Long-Chain Fatty Acid Uptake in Human Trophoblasts
by Kyle M. Siemers, Lisa A. Joss-Moore and Michelle L. Baack
Int. J. Mol. Sci. 2024, 25(21), 11534; https://doi.org/10.3390/ijms252111534 - 27 Oct 2024
Viewed by 1270
Abstract
In the parent, gestational diabetes mellitus (GDM) causes both hyperglycemia and hyperlipidemia. Despite excess lipid availability, infants exposed to GDM are at risk for essential long-chain polyunsaturated fatty acid (LCPUFA) deficiency. Isotope studies have confirmed less LCPUFA transfer from the parent to the fetus, but how diabetic fuels impact placental fatty acid (FA) uptake and lipid droplet partitioning is not well understood. We evaluated the effects of high glucose conditions, high lipid conditions, and their combination on trophoblast growth, viability, mitochondrial bioenergetics, BODIPY-labelled fatty acid uptake, and lipid droplet dynamics. The addition of four carbons or one double bond to FA acyl chains dramatically affected uptake in both BeWo and primary isolated cytotrophoblasts (CTBs), and uptake was further impacted by media exposure. Combination-exposed trophoblasts had more mitochondrial protein (p = 0.01) but impaired maximal and spare respiratory capacities (p < 0.001 and p < 0.0001), as well as lower viability (p = 0.004) due to apoptosis. Combination-exposed trophoblasts had unimpaired uptake of BODIPY C12 but significantly less whole-cell and lipid droplet uptake of BODIPY C16, with altered lipid droplet count, area, and subcellular localization, whereas these differences were not seen with high glucose or high lipid exposure alone. These findings bring us closer to understanding how GDM perturbs active FA transport, increasing the risk of adverse outcomes from placental and neonatal lipid accumulation alongside LCPUFA deficiency.
(This article belongs to the Special Issue Molecular Pathogenesis and Treatment of Pregnancy Complications)
Show Figures

Figure 1

Figure 1
<p>BeWo growth and viability in high glucose, high lipid, and combined conditions. BeWo cells were cultured in control, high glucose, high lipid, and combination media for 72 h and then uniformly plated to 24-well plates and cultured for 96 h in respective media. Daily cell counts were used to estimate growth over time (<b>A</b>) and calculate doubling time and fold change (<b>B</b>) from 24 h to 96 h. An apoptosis assay using flow cytometry was used to quantify APC-Annexin V (APC-A)-and PE-propidium iodide (PE-A)-tagged cells (<b>C</b>) to identify the percent of viable and apoptotic BeWo following 96 h of media exposure (<b>D</b>). <span class="html-italic">n</span> = 3/group; * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.01 by one-way ANOVA with Tukey’s multiple comparison test.</p>
Full article ">Figure 2
<p>Mitochondrial protein abundance in high glucose-, high lipid-, and combination-exposed BeWo. Representative western blot (<b>A</b>), relative abundance (<b>B</b>,<b>C</b>), and ratio (<b>D</b>) of mitochondrial proteins TOM20 and VDAC in BeWo lysate. Densitometry was normalized to the average of controls on each well’s respective blot (<span class="html-italic">n</span> = 6/exposure group). * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.01 by one-way ANOVA with Tukey’s multiple comparison test. See full, unedited blots in <a href="#app1-ijms-25-11534" class="html-app">Figure S1</a>.</p>
Full article ">Figure 3
<p>Cellular bioenergetics of control, high glucose-, high lipid-, and combination-exposed BeWo. Average oxygen consumption rate (OCR), which estimates cellular respiration, is shown as a trace across a mitochondrial stress test (<b>A</b>) and comparisons of average basal respiration (<b>B</b>), maximum respiration (<b>C</b>), and spare respiratory capacity (<b>D</b>) by group in BeWo cultured in control, high glucose, high lipid, and combination media. Average extracellular acidification (ECAR) estimates are shown for basal glycolysis (<b>E</b>), maximal glycolysis (<b>F</b>), and spare glycolytic capacity (<b>G</b>) by group. OCR and ECAR were used to calculate ATP production (<b>H</b>) and proton efflux rate (PER) leading to lactate production (anaerobic glycolysis) (<b>I</b>) and CO<sub>2</sub> (aerobic glycolysis) (<b>J</b>). Values are mean ± SEM (<b>A</b>), and individual values from experimental replicates (<b>B</b>–<b>G</b>) and calculated values (<b>H</b>–<b>J</b>) are shown with the line representing the mean. <span class="html-italic">n</span> = 12–24/group. * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.01, *** <span class="html-italic">p</span> &lt; 0.001, **** <span class="html-italic">p</span> &lt; 0.0001 by one-way ANOVA with Tukey’s multiple comparison test.</p>
Full article ">Figure 4
<p>Fatty acid (FA) uptake by carbon length and saturation. Representative images of BeWo with whole-cell regions of interest (ROIs) were taken by confocal live-cell imaging at 5, 20, 60, 120, and 180 min after adding BODIPY C12 (red), BODIPY C16 (green), and monounsaturated BODIPY C12 (MU C12, red) to media (<b>A</b>). BODIPY 505/515 neutral lipid counterstain was used to validate that lipid uptake occurred in BODIPY MU C12 experiments. The average relative fluorescent intensities were plotted over time to assess variation in kinetics (<b>B</b>). <span class="html-italic">n</span> = 3 biological replicates/group with 41–55 cells/group/time point imaged and analyzed. Values are mean ± SEM.</p>
Figure 5
Whole-cell fatty acid uptake in control, high glucose-, high lipid-, and combination-exposed BeWo over time. Each media group's whole-cell uptake of BODIPY C12 (A), BODIPY C16 (B), and BODIPY MU C12 (C) is shown over 180 min, and group comparisons demonstrate time- and FA-specific differences between exposure groups (D). p values, with arrows noting the direction of statistically significant change relative to control uptake across time points, are shown (D). n = 3/exposure group with 41–55 cells/group/time point analyzed. Values are mean ± SEM. Significant differences from control: p < 0.05 by one-way ANOVA with Tukey's multiple comparison test.
Figure 6
Effects of high glucose, high lipid, and combination exposure on the proportion of FA species in BeWo lipid droplets over time. Droplets were identified using green-channel fluorescence, either BODIPY 505/515 (shown in (A)) or BODIPY C16, depending on the experimental design. Droplets were segmented using ImageJ particle analysis, as represented by BODIPY 505/515 (green) and BODIPY MU C12 (red) in BeWo imaged at 20 min (A). The proportion of droplet to whole-cell intensity was calculated, where 1 represents the total fluorescence in the cell. This estimate of lipid droplet partitioning of individual FAs (BODIPY C12, C16, and MU C12) is illustrated over time in control (B) and high glucose-, high lipid-, and combination-exposed (C) BeWo. n = 3/exposure group with 41–55 cells/group/time point analyzed. Values are mean ± SEM. Significant differences (p < 0.05) from the control group at each time point by one-way ANOVA with Tukey's multiple comparison test are indicated with a white asterisk (*) within the column.
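The droplet-to-whole-cell intensity proportion described above is a masked-sum ratio. A minimal sketch, assuming pre-computed boolean masks (e.g., exported from ImageJ particle analysis) rather than the authors' exact pipeline:

```python
import numpy as np

def droplet_partition_fraction(image, droplet_mask, cell_mask):
    """Fraction of a cell's total fluorescence that falls inside
    segmented lipid droplets (1.0 = all fluorescence in droplets).
    `image` is a 2D intensity array; the masks are boolean arrays
    of the same shape."""
    total = image[cell_mask].sum()
    in_droplets = image[droplet_mask & cell_mask].sum()
    return in_droplets / total if total > 0 else 0.0

# Toy example: 4x4 image with one 2x2 "droplet" in a fully masked cell.
img = np.ones((4, 4))
cell = np.ones((4, 4), dtype=bool)
droplet = np.zeros((4, 4), dtype=bool)
droplet[1:3, 1:3] = True
print(droplet_partition_fraction(img, droplet, cell))  # 0.25
```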
Figure 7
Representative images of BODIPY C16 lipid droplets in BeWo highlight variations in the number and area occupied by lipid droplets (A). Average lipid droplet counts (B) and relative areas of BODIPY C16 accumulation per cell area (C) are represented by bar graphs per group over time. n = 3/exposure group with 41–55 cells/group/time point. Values are mean ± SEM. Significant differences from control: * p < 0.05 by one-way ANOVA with Tukey's multiple comparison test.
Figure 8
BODIPY C16-containing lipid droplet dynamics over time. Strategy for determining the localization of lipid droplets based on their distance from the center of the cell relative to the average cell radius (A), and BODIPY C16 droplet distances (B,C). Representative images of BODIPY C16 droplet distribution at 20 min in the control, high glucose, high lipid, and combination media groups (D). n = 3/exposure with 41–55 cells/group/time point. Values are mean ± SEM. * p < 0.05 by one-way ANOVA with Tukey's multiple comparisons test.
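The localization strategy in panel (A) reduces to dividing each droplet's distance from the cell centroid by an average cell radius. A sketch under one plausible definition of that radius (the radius of an equal-area disk; the paper's exact definition may differ, and all names here are hypothetical):

```python
import numpy as np

def normalized_droplet_distance(droplet_xy, cell_mask):
    """Distance of a droplet centroid from the cell centroid, divided
    by an average cell radius (here: radius of a disk with the same
    area as the cell mask). ~0 = cell center, ~1 = periphery."""
    ys, xs = np.nonzero(cell_mask)
    centroid = np.array([xs.mean(), ys.mean()])
    avg_radius = np.sqrt(cell_mask.sum() / np.pi)
    dx, dy = np.asarray(droplet_xy, dtype=float) - centroid
    return float(np.hypot(dx, dy) / avg_radius)

# Toy example: a filled circular "cell" of radius 20 px centered at (32, 32).
yy, xx = np.mgrid[0:64, 0:64]
cell = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2
print(normalized_droplet_distance((42, 32), cell))  # ~0.5
```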
Figure 9
Fatty acid uptake in primary human trophoblasts. Representative images and relative fluorescence for each BODIPY FA taken up in primary isolated cytotrophoblasts 12 h after isolation (A) and 96 h after isolation, by which time they have formed a syncytium (B). n = 7 patients, 10 cells measured per time point per patient. Values are mean ± SEM.
22 pages, 10937 KiB  
Article
Modular Nanotransporters Deliver Anti-Keap1 Monobody into Mouse Hepatocytes, Thereby Inhibiting Production of Reactive Oxygen Species
by Yuri V. Khramtsov, Alexey V. Ulasov, Andrey A. Rosenkranz, Tatiana A. Slastnikova, Tatiana N. Lupanova, Georgii P. Georgiev and Alexander S. Sobolev
Pharmaceutics 2024, 16(10), 1345; https://doi.org/10.3390/pharmaceutics16101345 - 21 Oct 2024
Viewed by 659
Abstract
Background/Objectives: The study of oxidative stress in cells and ways to prevent it attract increasing attention. Antioxidant defense of cells can be activated by releasing the transcription factor Nrf2 from a complex with Keap1, its inhibitor protein. The aim of the work was [...] Read more.
Background/Objectives: The study of oxidative stress in cells and of ways to prevent it attracts increasing attention. The antioxidant defense of cells can be activated by releasing the transcription factor Nrf2 from its complex with Keap1, its inhibitor protein. The aim of this work was to study the effect of a modular nanotransporter (MNT) carrying an R1 anti-Keap1 monobody (MNTR1) on cell homeostasis. Methods: Murine hepatocyte AML12 cells were used for the study. The interaction of fluorescently labeled MNTR1 with Keap1 fused to hrGFP was studied using the Fluorescence-Lifetime Imaging Microscopy–Förster Resonance Energy Transfer (FLIM-FRET) technique on living AML12 cells transfected with the Keap1-hrGFP gene. The release of Nrf2 from the complex with Keap1 and its levels in the cytoplasm and nuclei of AML12 cells were examined using a cellular thermal shift assay (CETSA) and confocal laser scanning microscopy, respectively. The effect of the MNT on the formation of reactive oxygen species was studied by flow cytometry using 6-carboxy-2′,7′-dichlorodihydrofluorescein diacetate. Results: MNTR1 interacts with Keap1 in the cytoplasm, releasing Nrf2 from the complex with Keap1 and producing a rapid rise in Nrf2 levels in both the cytoplasm and the nuclei, which ultimately protects cells from the action of hydrogen peroxide. Enabling cleavage of the monobody in endosomes enhances these effects. Conclusions: These findings open up a new approach to specifically modulating the interaction of intracellular proteins, as demonstrated here for the Keap1-Nrf2 system. Full article
Figure 1
Changes in the Nrf2 level in AML12 cells following MNT or sulforaphane addition. MNTR1 or MNT0 was added to AML12 cells for the indicated time. Fixed cells were stained by indirect immunofluorescence: Nrf2 is shown in red, and cell nuclei were stained with DAPI (blue). (a) Representative images of cells without any MNT addition (no additives); (b) cells after 2 h of incubation with 10 µM sulforaphane; (c,d) cells after incubation for the indicated times with 500 nM MNTR1 and MNT0, respectively. Bar: 10 µm.
Figure 2
Kinetics of Nrf2 level changes after the addition of MNTR1 and MNT0 in the cytoplasm (a) and nuclei (b). Data are presented as mean ± SE. * p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 3
Analysis of the interaction between intracellular Keap1-hrGFP and MNTR1-AF568 after its addition to AML12 cells transiently expressing Keap1-hrGFP. (a) Map of the mean fluorescence lifetime of hrGFP, τm, across a cell after one hour of incubation with 500 nM MNTR1-AF568. (b–d) Frequency distributions of the mean fluorescence lifetime of hrGFP, τm, in cells that were not treated with MNTR1-AF568 (b), incubated for 15 min with 500 nM MNTR1-AF568 (c), or incubated for one hour with 500 nM MNTR1-AF568 (d). The curves were averaged over 5 to 15 cells. Black lines represent the averaged curves; blue lines are the result of fitting with Gaussian components; red lines show the sum of the Gaussian components.
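Decomposing a lifetime histogram into Gaussian components, as in panels (b–d), is a routine curve-fitting step; one plausible reading is a free-donor population plus a FRET-quenched, shorter-lifetime population. A minimal sketch with synthetic data and hypothetical parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian components over lifetime values t."""
    g = lambda a, mu, s: a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
    return g(a1, mu1, s1) + g(a2, mu2, s2)

# Synthetic stand-in for a tau_m histogram (lifetimes in ns, made up).
t = np.linspace(1.5, 3.0, 60)
hist = two_gaussians(t, 1.0, 2.0, 0.10, 0.5, 2.5, 0.12)
hist += np.random.default_rng(0).normal(0, 0.02, t.size)

p0 = [1.0, 2.0, 0.1, 0.5, 2.5, 0.1]  # initial guesses for the fit
params, _ = curve_fit(two_gaussians, t, hist, p0=p0)
print(params.round(3))
```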
Figure 4
Studying the effect of MNT on the Nrf2 microenvironment by CETSA. (a) Examples of immunoblots of Nrf2 in complex with Keap1 (cell heating), of active Nrf2 (heating of the cell lysate with MNTR1), and of Nrf2 after 15 min of incubation of AML12 cells with 500 nM MNTR1 (cell heating), with 500 nM MNT0 (cell heating), or with 500 nM MNTR1 at 4 °C (cell heating). (b) Melting curves of Nrf2 in complex with Keap1 (blue curve), active Nrf2 (red curve), and Nrf2 after 15 min of incubation of AML12 cells with 500 nM MNTR1 (black curve), 500 nM MNT0 (green curve), or 500 nM MNTR1 at 4 °C (brown curve). (c) Melting curves of Nrf2 in complex with Keap1 (blue curve), active Nrf2 (red curve), and Nrf2 after incubation of AML12 cells with 500 nM MNTR1 for 2 min (dark yellow curve), 5 min (wine curve), 10 min (magenta curve), and 15 min (black curve). The data in (b,c) were obtained by CETSA using immunoblotting with antibodies against Nrf2, and the curves are normalized to the average intensity of the band corresponding to Nrf2 at 37 °C. (d) Dependence of the fraction of active Nrf2 at 37 °C on the incubation time of AML12 cells with 500 nM MNTR1 (black curve) or MNTclR1 (red curve). Mean values ± standard error (n = 4–14) are provided.
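One simple way to turn such melting curves into the "fraction of active Nrf2" in panel (d) is to model each observed curve as a linear mix of the two reference states; whether this two-state mixing model matches the authors' exact analysis is an assumption, and all values below are made up:

```python
import numpy as np

def active_fraction(observed, ref_active, ref_complex):
    """Least-squares estimate of f in
    observed ~ f * ref_active + (1 - f) * ref_complex,
    where all inputs are melting-curve intensities at the same
    temperatures, normalized to the 37 °C band."""
    d = ref_active - ref_complex
    f = np.dot(observed - ref_complex, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# Made-up reference curves for the two states and a 70/30 mixture.
ref_complex = np.array([1.0, 0.80, 0.40, 0.10, 0.00])  # Nrf2 bound to Keap1
ref_active = np.array([1.0, 0.95, 0.80, 0.50, 0.20])   # released Nrf2
observed = 0.7 * ref_active + 0.3 * ref_complex
print(active_fraction(observed, ref_active, ref_complex))  # 0.7
```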
Figure 5
Effect of MNTs on ROS generation. The effect of pre-incubating AML12 cells with 500 nM MNT0 for 15 min on cDCF fluorescence at different time points (1–6 h after adding MNT0) is shown in plot (a). The effects of pre-incubating AML12 cells with 500 nM MNTR1 for 5, 10, and 15 min on cDCF fluorescence at different time points (1–6 h after adding MNTR1) are shown in plots (b), (c), and (d), respectively. The effect of pre-incubating AML12 cells with 500 nM MNTclR1 for 5 min on cDCF fluorescence at different time points (1–6 h after adding MNTclR1) is shown in plot (e). Data are presented as mean ± SE (n = 6–18). The significance of the difference between groups with MNT addition and the control group (no MNT) is shown (** p < 0.01, *** p < 0.001, **** p < 0.0001).
25 pages, 2595 KiB  
Article
Appearance-Based Gaze Estimation as a Benchmark for Eye Image Data Generation Methods
by Dmytro Katrychuk and Oleg V. Komogortsev
Appl. Sci. 2024, 14(20), 9586; https://doi.org/10.3390/app14209586 - 21 Oct 2024
Viewed by 870
Abstract
Data augmentation is commonly utilized to increase the size and diversity of training sets for deep learning tasks. In this study, we propose a novel application of an existing image generation approach in the domain of realistic eye images that leverages data collected [...] Read more.
Data augmentation is commonly utilized to increase the size and diversity of training sets for deep learning tasks. In this study, we propose a novel application of an existing image generation approach in the domain of realistic eye images that leverages data collected from 40 subjects. This hybrid method retains the precise control over image content provided by 3D rendering while introducing the previously lacking photorealism and diversity into synthetic images through neural style transfer. We demonstrate its general efficacy as a data augmentation tool for appearance-based gaze estimation when the generated data are mixed with a sparse training set of real images. It improved results for 39 out of 40 subjects, with an 11.22% mean and a 19.75% maximum decrease in gaze estimation error, achieving similar metrics for train and held-out subjects. We release our data repository of eye images with gaze labels used in this work for public access. Full article
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)
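The augmentation scheme in the abstract amounts to training on the union of a sparse real set and a larger stylized-synthetic set. A minimal PyTorch-style sketch with stand-in tensors (all names and sizes hypothetical):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for the two sources: a sparse real training set and a larger
# pool of stylized synthetic renders, each sample a (grayscale eye image,
# 2D gaze label) pair.
real_train = TensorDataset(torch.rand(200, 1, 64, 96), torch.rand(200, 2))
synthetic = TensorDataset(torch.rand(2000, 1, 64, 96), torch.rand(2000, 2))

# Augmentation here is simply training on the union of both datasets.
mixed_loader = DataLoader(ConcatDataset([real_train, synthetic]),
                          batch_size=64, shuffle=True)

for images, gaze in mixed_loader:
    pass  # forward/backward pass of the gaze estimator would go here
```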
Figure 1
The outline of the original EyeGAN approach.
Figure 2
A schematic side view of the VOG data collection system (reproduced from [60] with the authors' permission).
Figure 3
The outline of the proposed eye image generation method. The original StarGANv2 is modified with SPADE+AdaIN normalization layers in the image decoder and is adapted for domain transfer between real and synthetic eye images.
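SPADE and AdaIN are both normalize-then-modulate schemes: SPADE predicts spatial modulation maps from a segmentation mask, while AdaIN predicts channel-wise modulation from a style vector. One plausible way to combine them in a decoder layer, offered as an illustrative sketch rather than the paper's exact wiring (class name and dimensions are hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpadeAdaIN(nn.Module):
    """Illustrative SPADE+AdaIN block: instance-normalize the features,
    then modulate with (gamma, beta) that sum a spatial SPADE branch
    (predicted from the segmentation mask) and a channel-wise AdaIN
    branch (predicted from the style vector)."""
    def __init__(self, channels, mask_ch=1, style_dim=64, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(mask_ch, hidden, 3, padding=1), nn.ReLU())
        self.spade_gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.spade_beta = nn.Conv2d(hidden, channels, 3, padding=1)
        self.style = nn.Linear(style_dim, channels * 2)

    def forward(self, x, mask, style):
        # Resize the mask to the feature resolution, as SPADE does.
        h = self.shared(F.interpolate(mask, size=x.shape[2:], mode="nearest"))
        g_sp, b_sp = self.spade_gamma(h), self.spade_beta(h)
        g_st, b_st = self.style(style).chunk(2, dim=1)
        g_st, b_st = g_st[..., None, None], b_st[..., None, None]
        return self.norm(x) * (1 + g_sp + g_st) + (b_sp + b_st)

# Toy shapes: batch of 2, 8 feature channels, binary mask, 64-d style code.
layer = SpadeAdaIN(channels=8)
out = layer(torch.rand(2, 8, 16, 16), torch.rand(2, 1, 32, 32),
            torch.rand(2, 64))
print(out.shape)  # torch.Size([2, 8, 16, 16])
```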
Figure 4
Comparison of synthetic data generation methods when used for data augmentation in appearance-based gaze estimation. The gaze estimation error improvement relative to the sparse training set of real images is summarized as a boxplot of the per-subject mean over 5 test folds.
Figure 5
Comparison of data augmentation performance between train and test subjects for the Blender and StarGANv2-SPADE+AdaIN methods. The plot format follows Figure 4.
Figure A1
Image examples with corresponding segmentation masks that were rendered using the modified Blender model.
Figure A2
Real eye image examples (the fourth column) with corresponding segmentation masks that were computed at the first iteration of EyeGAN (the first column), at the second iteration of EyeGAN (the second column), and using the RITNet model (the third column).
Figure A3
EyeGAN-RITNet generated images with good quality; each row is an input–output triplet (left: input segmentation mask; middle: a corresponding Blender render; right: EyeGAN-RITNet output).
Figure A4
EyeGAN-RITNet generated images with quality issues; the image structure follows Figure A3.
Figure A5
StarGANv2-SPADE+AdaIN generated images with good subject appearance consistency; each row is an input–output triplet (left: input Blender render; middle: input VOG image for target subject appearance; right: StarGANv2-SPADE+AdaIN output).
Figure A6
StarGANv2-SPADE+AdaIN generated images with bad subject appearance consistency; the image structure follows Figure A5.