Search Results (475)

Search Parameters:
Keywords = label-free imaging

18 pages, 4642 KiB  
Article
Enhanced Palatal Wound Healing with Leucocyte- and Platelet-Rich Fibrin After Free Gingival Graft Harvesting: A Prospective Randomized Controlled Clinical Trial
by Serap Gulsever and Sina Uckan
J. Clin. Med. 2025, 14(3), 1029; https://doi.org/10.3390/jcm14031029 - 6 Feb 2025
Abstract
Background/Objectives: Autogenous palatal free gingival graft (FGG) harvesting presents challenges for patients due to the increased risk of postoperative morbidity related to a second intraoral surgical wound that heals with secondary intention. This parallel-group, randomized, controlled, open-label trial aimed to evaluate the efficacy of the application of leukocyte- and platelet-rich fibrin (L-PRF) membrane to the palatal donor site on wound healing, hemostasis, and pain control after FGG harvesting. Methods: Twenty-eight adult patients with insufficient attached gingiva underwent soft tissue augmentation using FGG harvested from the palate at the Department of Oral and Maxillofacial Surgery, Baskent University, Turkey. Patients were randomized to either an L-PRF group or a control group. In the L-PRF group, the L-PRF membrane was sutured to the donor sites, whereas in the control group, donor sites healed by secondary intention. Postoperative evaluations were conducted on days 1, 3, 5, and 7 and at weeks 2, 3, 4, 5, and 6. Donor sites were evaluated clinically for pain, burning sensation, bleeding, wound healing, and color match to adjacent tissues. Donor site wound areas were analyzed using digital images. Results: Two patients were excluded from the analysis due to loss of contact, leaving 26 (n = 13, n = 13) patients for analysis. Donor site pain and burning sensation were significantly lower in the L-PRF group compared to the control group during the first two postoperative weeks (p < 0.001). Bleeding was significantly lower in the L-PRF group on postoperative days 1 and 3 (p < 0.001). Clinical healing index scores were significantly higher in the L-PRF group at weeks 3 and 4 (p < 0.001). Additionally, palatal wound area reductions from baseline were significantly greater in the L-PRF group at all follow-up intervals (p < 0.001). 
Conclusions: The application of an L-PRF membrane to palatal donor wounds after FGG harvesting significantly reduces postoperative pain, decreases bleeding, and accelerates healing, providing a valuable autologous biomaterial for enhanced wound healing and improved patient comfort. Full article
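The digital wound-area measurement used in this trial (pixel counts scaled by a paper ruler patch of known physical area, as described in the Figure 2 caption) reduces to a simple proportion. A minimal sketch; the function names and pixel counts below are illustrative, not taken from the study:

```python
def wound_area_cm2(wound_px, ruler_px, ruler_cm2=0.2):
    """Convert a wound's pixel count to cm^2 using a reference patch of
    known physical area (here 0.2 cm^2) imaged in the same photograph."""
    return wound_px * ruler_cm2 / ruler_px

def percent_reduction(baseline_cm2, followup_cm2):
    """Wound-area reduction relative to baseline, in percent."""
    return 100.0 * (baseline_cm2 - followup_cm2) / baseline_cm2

# Illustrative pixel counts (not from the study):
baseline = wound_area_cm2(15000, 3000)   # 1.0 cm^2
day7 = wound_area_cm2(6000, 3000)        # 0.4 cm^2
print(percent_reduction(baseline, day7)) # ~60% reduction
```

Imaging the ruler in every frame makes the conversion robust to changes in camera distance between follow-up visits.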
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
Figure 1
<p>Flow diagram of the study. L-PRF: leukocyte- and platelet-rich fibrin.</p>
Figure 2
<p>(<b>a</b>,<b>b</b>) Calculation of the donor site wound area in pixels at baseline. (<b>c</b>,<b>d</b>) Calculation of the paper ruler area portion in pixels with an actual area of 0.2 cm<sup>2</sup> at baseline. (<b>e</b>–<b>i</b>) Calculation of the donor site wound area and ruler area in pixels on days 3, 7, 14, 21, and 28 postoperatively, respectively.</p>
Figure 3
<p>(<b>a</b>) Donor site pain scores at follow-up intervals. (<b>b</b>) Donor site burning sensation scores at follow-up intervals. (<b>c</b>) Donor site bleeding scores at follow-up intervals. (<b>d</b>) Donor site color matching scores at follow-up intervals. (<b>e</b>) Donor site wound area percentage reductions relative to baseline at follow-up intervals. (<b>f</b>) Clinical healing index scores of donor site wounds at follow-up intervals.</p>
25 pages, 6944 KiB  
Article
Representation Learning of Multi-Spectral Earth Observation Time Series and Evaluation for Crop Type Classification
by Andrea González-Ramírez, Clement Atzberger, Deni Torres-Roman and Josué López
Remote Sens. 2025, 17(3), 378; https://doi.org/10.3390/rs17030378 - 23 Jan 2025
Abstract
Remote sensing (RS) spectral time series provide a substantial source of information for the regular and cost-efficient monitoring of the Earth’s surface. Important monitoring tasks include land use and land cover classification, change detection, forest monitoring and crop type identification, among others. To develop accurate solutions for RS-based applications, often supervised shallow/deep learning algorithms are used. However, such approaches usually require fixed-length inputs and large labeled datasets. Unfortunately, RS images acquired by optical sensors are frequently degraded by aerosol contamination, clouds and cloud shadows, resulting in missing observations and irregular observation patterns. To address these issues, efforts have been made to implement frameworks that generate meaningful representations from the irregularly sampled data streams and alleviate the deficiencies of the data sources and supervised algorithms. Here, we propose a conceptually and computationally simple representation learning (RL) approach based on autoencoders (AEs) to generate discriminative features for crop type classification. The proposed methodology includes a set of single-layer AEs with a very limited number of neurons, each one trained with the mono-temporal spectral features of a small set of samples belonging to a class, resulting in a model capable of processing very large areas in a short computational time. Importantly, the developed approach remains flexible with respect to the availability of clear temporal observations. The signal derived from the ensemble of AEs is the reconstruction difference vector between input samples and their corresponding estimations, which are averaged over all cloud-/shadow-free temporal observations of a pixel location. This averaged reconstruction difference vector is the base for the representations and the subsequent classification. 
Experimental results show that the proposed extremely lightweight architecture indeed generates separable features for competitive performance in crop type classification, as distance metric scores achieved with the derived representations significantly outperform those obtained with the initial data. Conventional classification models were trained and tested with representations generated from a widely used Sentinel-2 multi-spectral multi-temporal dataset, BreizhCrops. Our method achieved 77.06% overall accuracy, which is 6% higher than that achieved using original Sentinel-2 data within conventional classifiers and even 4% better than complex deep models such as OmniScaleCNN. Compared to extremely complex and time-consuming models such as Transformer and long short-term memory (LSTM), only a 3% reduction in overall accuracy was noted. Our method uses only 6.8k parameters, i.e., 400x fewer than OmniScaleCNN and 27x fewer than Transformer. The results prove that our method is competitive in terms of classification performance compared with state-of-the-art methods while substantially reducing the computational load. Full article
(This article belongs to the Collection Sentinel-2: Science and Applications)
Show Figures

Figure 1

Figure 1
<p>Illustration of representation learning (RL) as a function <span class="html-italic">f</span>, mapping vectors from a dimensional space to a representation space.</p>
Full article ">Figure 2
<p>Example of an autoencoder architecture with mathematical definition as a function. In the present work, the reconstruction difference between the input and output is used as a representation and not the code itself.</p>
Full article ">Figure 3
<p>First level of the proposed workflow. A scene classification product provided by the European Space Agency (ESA) is used to mask out cloudy samples from a geographic point (pixel) shaped as a <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>×</mo> <mi>B</mi> </mrow> </semantics></math> array.</p>
Full article ">Figure 4
<p>Proposed framework block diagram. The full methodology is composed of four main blocks: data preprocessing, model training, representation generation and evaluation.</p>
Full article ">Figure 5
<p>Dataset downloading process using the Google Earth Engine (GEE) database.</p>
Full article ">Figure 6
<p>Example of the expected output for positive and negative samples. The difference from the ensemble of autoencoders (AEs) constitutes the representations for the downstream task.</p>
Full article ">Figure 7
<p>Autoencoder (AE) training. Each autoencoder is trained with a finite set of individual spectral curves belonging to one of the crop types. The reconstructions from the <span class="html-italic">C</span> classes are used to calculate the difference vector across the ensemble that is the final set of representations.</p>
Full article ">Figure 8
<p>Inference workflow of the proposed framework. For each temporal set of cloud-free reflectance spectra, the average reconstruction difference vector is calculated for each of the <span class="html-italic">C</span> autoencoders (AEs) and concatenated to define the representations of this pixel.</p>
Full article ">Figure 9
<p>3D scatterplot of (<b>a</b>) S2 fixed-length time series (45 observations) and (<b>b</b>) representation over three principal components obtained by t-distributed Stochastic Neighbor Embedding (TSNE) only for visual interpretation.</p>
Full article ">Figure 10
<p>Overall accuracy (OA) of the random forest (RF), support vector machine (SVM), extreme gradient boosting (XGBoost) and fully connected network (FCN) trained with a variable percentage of training samples and using (i) representations (solid line) and (ii) original Sentinel-2 data (broken line).</p>
Full article ">Figure 11
<p>(<b>a</b>) True color image of the study area in 2017 and composites images generated by combining three random representations per map: (<b>b</b>) 9-64-30, (<b>c</b>) 59-84-81, (<b>d</b>) 30-11-141, (<b>e</b>) 45-66-57, (<b>f</b>) 20-10-32, (<b>g</b>) 5-142-83 and (<b>h</b>) 24-79-133.</p>
Full article ">Figure 12
<p>(<b>a</b>) Study area ground truth at field level (polygons), (<b>b</b>) representations-based fully connected network (FCN) pixel-wise classification (raster), (<b>c</b>) representations-based FCN field-based classification (polygons) and (<b>d</b>) map of correctly classified fields in green and misclassified fields in red.</p>
Full article ">Figure 12 Cont.
<p>(<b>a</b>) Study area ground truth at field level (polygons), (<b>b</b>) representations-based fully connected network (FCN) pixel-wise classification (raster), (<b>c</b>) representations-based FCN field-based classification (polygons) and (<b>d</b>) map of correctly classified fields in green and misclassified fields in red.</p>
Full article ">Figure A1
<p>Hyperparameters and quality indicators correlation matrix.</p>
Full article ">
14 pages, 2926 KiB  
Article
Portable Cell Tracking Velocimetry for Quantification of Intracellular Fe Concentration of Blood Cells
by Linh Nguyen T. Tran, Karla Mercedes Paz Gonzalez, Hyeon Choe, Xian Wu, Jacob Strayer, Poornima Ramesh Iyer, Maciej Zborowski, Jeffrey Chalmers and Jenifer Gomez-Pastora
Micromachines 2025, 16(2), 126; https://doi.org/10.3390/mi16020126 - 23 Jan 2025
Viewed by 556
Abstract
Hematological analysis is crucial for diagnosing and monitoring blood-related disorders. Nevertheless, conventional hematology analyzers remain confined to laboratory settings due to their high cost, substantial space requirements, and maintenance needs. Herein, we present a portable cell tracking velocimetry (CTV) device for the precise [...] Read more.
Hematological analysis is crucial for diagnosing and monitoring blood-related disorders. Nevertheless, conventional hematology analyzers remain confined to laboratory settings due to their high cost, substantial space requirements, and maintenance needs. Herein, we present a portable cell tracking velocimetry (CTV) device for the precise measurement of the magnetic susceptibility of biological entities at the single-cell level, focusing on red blood cells (RBCs) in this work. The system integrates a microfluidic channel positioned between permanent magnets that generate a well-defined magnetic field gradient (191.82 TA/mm2). When the cells are injected into the chamber, their particular response to the magnetic field is recorded and used to estimate their properties and quantify their intracellular hemoglobin (Hb) concentration. We successfully track over 400 RBCs per condition using imaging and trajectory analysis, enabling detailed characterizations of their physical and magnetic properties. A comparison of the mean corpuscular hemoglobin measurements revealed a strong correlation between our CTV system and standard ultraviolet–visible (UV-Vis) spectrophotometry (23.1 ± 5.8 pg vs. 22.4 ± 3.9 pg, p > 0.05), validating the accuracy of our measurements. The system’s single-cell resolution reveals population distributions unobtainable through conventional bulk analysis methods. Thus, this portable CTV technology provides a rapid, label-free approach for magnetic cell characterization, offering new possibilities for point-of-care hematological analysis and field-based research applications. Full article
(This article belongs to the Special Issue Research Progress of Microfluidic Bioseparation and Bioassay)
Show Figures

Figure 1

Figure 1
<p>Schematic diagram of the CTV system showing the optical setup, measurement chamber dimensions, and sample trajectory analysis. Magnets are located inside the measurement chamber and an onset is provided along with the magnetic field intensity in the measurement area.</p>
Full article ">Figure 2
<p>Representative particle trajectories of polystyrene beads in MnCl<sub>2</sub> solutions at different concentrations: (<b>A</b>) 0.025 M, (<b>B</b>) 0.05 M, and (<b>C</b>) 0.075 M.</p>
Full article ">Figure 3
<p>COMSOL-simulated magnetic energy gradient. The red rectangle in the channel presents the ROI used for tracing the particles in our experiments.</p>
Full article ">Figure 4
<p>Representative cell trajectories of (<b>A</b>) oxyhemoglobin-containing RBCs and (<b>B</b>) methemoglobin-containing RBCs inside the CTV device. Note the different trajectories of both cell types, with the non-magnetic (diamagnetic) oxyhemoglobin cells moving only downwards due to gravity and with the paramagnetic metHb-RBCs moving in both directions (horizontally and vertically) due to the effect of the magnetic and gravitational fields.</p>
Full article ">Figure 5
<p>Distribution plots showing the relationship between MCH content (pg of Hb per cell), magnetically induced velocity (mm/s), and settling velocity (mm/s) measured from a representative RBC sample. Top panel: MCH distribution with cumulative frequency. Bottom panels: scatter plot of settling velocity versus magnetically induced velocity (<b>left</b>) and corresponding settling velocity distribution (<b>right</b>).</p>
Full article ">Figure 6
<p>MCH content analysis by CTV and UV–Vis spectrophotometry. (<b>A</b>) Individual MCH values across donors HD1–HD4. (<b>B</b>) Statistical comparison of mean MCH values (pg) between methods. Error bars represent standard deviation; ns: not significant (paired <span class="html-italic">t</span>-test, <span class="html-italic">p</span> &gt; 0.05).</p>
Full article ">
18 pages, 7563 KiB  
Article
Quantitative Analysis Using PMOD and FreeSurfer for Three Types of Radiopharmaceuticals for Alzheimer’s Disease Diagnosis
by Hyun Jin Yoon, Daye Yoon, Sungmin Jun, Young Jin Jeong and Do-Young Kang
Algorithms 2025, 18(2), 57; https://doi.org/10.3390/a18020057 - 21 Jan 2025
Viewed by 516
Abstract
In amyloid brain PET, after parcellation using the finite element method (FEM)-based algorithm FreeSurfer and voxel-based algorithm PMOD, SUVr examples can be extracted and compared. This study presents the classification SUVr threshold in PET images of F-18 florbetaben (FBB), F-18 flutemetamol (FMM), and [...] Read more.
In amyloid brain PET, after parcellation using the finite element method (FEM)-based algorithm FreeSurfer and voxel-based algorithm PMOD, SUVr examples can be extracted and compared. This study presents the classification SUVr threshold in PET images of F-18 florbetaben (FBB), F-18 flutemetamol (FMM), and F-18 florapronol (FPN) and compares and analyzes the classification performance according to computational algorithm in each brain region. PET images were co-registered after the generated MRI was registered with standard template information. Using MATLAB script, SUVr was calculated using the built-in parcellation number labeled in the brain region. PMOD and FreeSurfer with different algorithms were used to load the PET image, and after registration in MRI, it was normalized to the MRI template. The volume and SUVr of the individual gray matter space region were calculated using an automated anatomical labeling atlas. The SUVr values of eight regions of the frontal cortex (FC), lateral temporal cortex (LTC), mesial temporal cortex (MTC), parietal cortex (PC), occipital cortex (OC), anterior and posterior cingulate cortex (GCA, GCP), and composite were calculated. After calculating the correlation of SUVr using the FreeSurfer and PMOD algorithms and calculating the AUC for amyloid-positive/negative subjects, the classification ability was calculated, and the SVUr threshold was calculated using the Youden index. The correlation coefficients of FreeSurfer and PMOD SUVr calculations of the eight regions of the brain cortex were FBB (0.95), FMM (0.94), and FPN (0.91). The SUVr threshold was SUVr(LTC,min) = 1.264 and SUVr(THA,max) = 1.725 when calculated using FPN-FreeSurfer, and SUVr(MTC,min) = 1.093 and SUVr(MCT,max) = 1.564 when calculated using FPN-PMOD. 
The AUC comparison showed that there was no statistically significant difference (p > 0.05) in the SUVr classification results using the three radiopharmaceuticals, specifically for the LTC and OC regions in the PMOD analysis, and the LTC and PC regions in the FreeSurfer analysis. The SUVr calculation using PMOD (voxel-based algorithm) has a strong correlation with the calculation using FreeSurfer (FEM-based algorithm); therefore, they complement each other. Quantitative classification analysis with high accuracy is possible using the suggested SUVr threshold. The SUVr classification performance was good in the order of FMM, FBB, and FPN, and showed a good classification performance in the LTC region regardless of the type of radiotracer and analysis algorithm. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
Show Figures

Figure 1

Figure 1
<p>The negative (<b>A</b>) and positive (<b>B</b>) FBB brain PET images. Cropped T1-weighted MRI (<b>top</b>), cropped PET in MRI space (<b>bottom</b>) are shown.</p>
Full article ">Figure 2
<p>Representation of the parcellation and segmentation images using PMOD (<b>A</b>), cortical parcellation, and sub-cortical segmentation images using FreeSurfer (<b>B</b>), AAL atlas (<b>C</b>), and Desikan–Killiany atlas (<b>D</b>).</p>
Full article ">Figure 3
<p>A box plot comparing the normal and patient groups by extracting the SUVr of FBB, FMM, and FPN PET from nine regions using PMOD. Each value was calculated for FC (<b>A</b>), LTC (<b>B</b>), MTC (<b>C</b>), PC (<b>D</b>), OC (<b>E</b>), GCA (<b>F</b>), GCP (<b>G</b>), PQ (<b>H</b>), and composite (<b>I</b>) regions.</p>
Full article ">Figure 4
<p>A box plot comparing the normal and patient groups by extracting the SUVr of FBB, FMM, and FPN PET from nine regions using FreeSurfer Each value was calculated for FC (<b>A</b>), LTC (<b>B</b>), MTC (<b>C</b>), PC (<b>D</b>), OC (<b>E</b>), GCA (<b>F</b>), GCP (<b>G</b>), PQ (<b>H</b>), and composite (<b>I</b>) regions.</p>
Full article ">Figure 5
<p>Box plot comparing the SUVr calculated in the composite region using PMOD of FBB (<b>A</b>–<b>C</b>), FMM (<b>D</b>–<b>F</b>), FPN (<b>G</b>–<b>I</b>) brain PET between the normal and patient groups. The distribution of SUVr in the composite region is presented as a box plot between normal and patient groups by dividing according to age (<b>B</b>,<b>E</b>,<b>H</b>) and sex (<b>C</b>,<b>F</b>,<b>I</b>). Each value was calculated for FC, LTC, MTC, PC, OC, GCA, GCP, PQ, and composite regions.</p>
Full article ">Figure 6
<p>Box plot comparing SUVr calculated in the composite region using FreeSurfer of FBB (<b>A</b>–<b>C</b>), FMM (<b>D</b>–<b>F</b>), and FPN (<b>G</b>–<b>I</b>) brain PET between the normal and patient groups. The distribution of SUVr in the composite region is presented as a box plot between the normal and patient groups by dividing according to age (<b>B</b>,<b>E</b>,<b>H</b>) and sex (<b>C</b>,<b>F</b>,<b>I</b>).</p>
Full article ">Figure 7
<p>Scatterplot of the FBB brain PET SUVr calculated using PMOD and FreeSurfer algorithms. The <span class="html-italic">x</span> axis is FreeSurfer and the <span class="html-italic">y</span> axis is the FBB brain PET SUVr calculated using the FreeSurfer algorithm.</p>
Full article ">Figure 8
<p>Scatterplot of FMM brain PET SUVr calculated using PMOD and FreeSurfer algorithms. The <span class="html-italic">x</span> axis is FreeSurfer and the <span class="html-italic">y</span> axis is the FMM brain PET SUVr calculated using the FreeSurfer algorithm.</p>
Full article ">Figure 9
<p>Scatterplot of FPN brain PET SUVr calculated using PMOD and FreeSurfer algorithms. FPN brain PET SUVr was calculated using the FreeSurfer algorithm on the <span class="html-italic">x</span> axis and FreeSurfer on the <span class="html-italic">y</span> axis.</p>
Full article ">Figure 10
<p>ROC comparison calculated using FreeSurfer (<b>A</b>,<b>C</b>,<b>E</b>) and PMOD (<b>B</b>,<b>D</b>,<b>F</b>) algorithms in FBB (<b>A</b>,<b>B</b>), FMM (<b>C</b>,<b>D</b>), and FPN (<b>E</b>,<b>F</b>) brain PET SUVr. ROCs were calculated and compared in the FC, PC, OC, GCP, PQ, and composite regions.</p>
Full article ">
23 pages, 7031 KiB  
Article
Fluorescence Lifetime Endoscopy with a Nanosecond Time-Gated CAPS Camera with IRF-Free Deep Learning Method
by Pooria Iranian, Thomas Lapauw, Thomas Van den Dries, Sevada Sahakian, Joris Wuts, Valéry Ann Jacobs, Jef Vandemeulebroucke, Maarten Kuijk and Hans Ingelberts
Sensors 2025, 25(2), 450; https://doi.org/10.3390/s25020450 - 14 Jan 2025
Viewed by 643
Abstract
Fluorescence imaging has been widely used in fields like (pre)clinical imaging and other domains. With advancements in imaging technology and new fluorescent labels, fluorescence lifetime imaging is gradually gaining recognition. Our research department is developing the tauCAMTM, based on the [...] Read more.
Fluorescence imaging has been widely used in fields like (pre)clinical imaging and other domains. With advancements in imaging technology and new fluorescent labels, fluorescence lifetime imaging is gradually gaining recognition. Our research department is developing the tauCAMTM, based on the Current-Assisted Photonic Sampler, to achieve real-time fluorescence lifetime imaging in the NIR (700–900 nm) region. Incorporating fluorescence lifetime into endoscopy could further improve the differentiation of malignant and benign cells based on their distinct lifetimes. In this work, the capabilities of an endoscopic lifetime imaging system are demonstrated using a rigid endoscope involving various phantoms and an IRF-free deep learning-based method with only 6-time points. The results show that this application’s fluorescence lifetime image has better lifetime uniformity and precision with 6-time points than the conventional methods. Full article
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1

Figure 1
<p>Schematic diagram of the FLT endoscopy imaging.</p>
Full article ">Figure 2
<p>(<b>a</b>) Intensity pattern of the endoscope illumination through the FoV at WD of 3 cm, and (<b>b</b>) the 1D pattern through the mentioned red line in the diagonal direction, which has a Gaussian distribution. (<b>c</b>) The orange square marks the location of strong illumination, with its corresponding IRF showing a clear signal. (<b>d</b>) The green square indicates a region where the endoscope attenuates the light intensity, corresponding to a noise-dominated IRF.</p>
Full article ">Figure 3
<p>Resolution analysis of the FLT endoscopy system based on resolution test target USAF 1951.</p>
Full article ">Figure 4
<p>Topology of FLTCNN to analyze mono-exponential fluorescence decays. The details of hyperparameters in each layer in parenthesis represent the number of filters, and the kernel size, respectively. The input is an image stack of (128,128,6). The architecture of SimiResBlock, and DownSampleBlock (consists of 4, 2D convolutional layers with decrementing filter sizes) are shown with a dashed box. The BN and the ReLU are added after convolutional layers.</p>
Full article ">Figure 5
<p>Synthetic training data generation flow for mono-exponential fluorescence signal model.</p>
Full article ">Figure 6
<p>(<b>a</b>) MAE graph of training/validation vs. epochs. (<b>b</b>) The MAE of predicted results of the testing datasets. (<b>c</b>) The mean value of MAE for a lifetime is under different conditions. The SNR takes the value between 20 to 1000 for A = 10, 20, 50, and 100. The blue area denotes the lifetime range of training data. (<b>d</b>) t-SNE visualization was obtained via the last activation map before the down-sampling block, where each point represented a TPSF voxel and was assigned a randomized lifetime value.</p>
Full article ">Figure 7
<p>FLT image of a uniform ICG-equivalent phantom predicted by (<b>a</b>) Lmfit (Levenberg–Marquardt), (<b>b</b>) FLTCNN in the macroscopic wide-field regime. (<b>c</b>) Shows the normalized fluorescence intensity of the phantom. (<b>d</b>) FLT histogram of the ICG uniform phantom processed with Lmfit and FLTCNN.</p>
Full article ">Figure 8
<p>(<b>a</b>,<b>b</b>) FLT and intensity images, and (<b>c</b>) FLT histogram of the ICG uniform phantom captured by FLT endoscopy system and processed with FLTCNN algorithm.</p>
Full article ">Figure 9
<p>(<b>a</b>,<b>b</b>) FLT and intensity images of the uniform ICG phantom under an angle.</p>
Full article ">Figure 10
<p>(<b>a</b>) Lmfit analysis in the macroscopic regime, (<b>b</b>) Lmfit analysis of 6-time points in the endoscopy regime, (<b>c</b>) FLTCNN analysis in the endoscopy regime, and (<b>d</b>) fluorescence intensity image of the concentration ICG phantom, Quel Imaging. (<b>e</b>–<b>g</b>) histogram of each well related to each approach to predict lifetime.</p>
Full article ">Figure 11
<p>FLT images of ICG-equivalent phantoms (distortion, coin, and vessel, Quel Imaging) analyzed with (<b>a</b>–<b>c</b>) Lmfit full decay, (<b>d</b>–<b>f</b>) Lmfit 6-data point, (<b>g</b>–<b>i</b>) FLTCNN, and (<b>j</b>–<b>l</b>) fluorescence intensity.</p>
Full article ">Figure 12
<p>The lifetime distribution of the ICG distortion, coin, and vessel phantoms calculated by (<b>a</b>) Lmfit analysis in the macroscopic regime, (<b>b</b>) Lmfit analysis of 6-time points in the endoscopy regime, and (<b>c</b>) FLTCNN analysis in the endoscopy regime.</p>
Full article ">Figure 13
<p>(<b>a</b>) Lmfit analysis in the macroscopic regime, (<b>b</b>) Lmfit analysis of 6-time points in the endoscopy regime, (<b>c</b>) FLTCNN analysis in the endoscopy regime, and (<b>d</b>) fluorescence intensity image of QUEL mixed phantoms containing ICG in ST01/LU02 and OTL38 in ST01/LU02. (<b>e</b>–<b>g</b>) histogram of each well related to each approach to predict lifetime.</p>
Full article ">
20 pages, 3854 KiB  
Article
Fluorescence Lifetime Imaging of NAD(P)H in Patients’ Lymphocytes: Evaluation of Efficacy of Immunotherapy
by Diana V. Yuzhakova, Daria A. Sachkova, Anna V. Izosimova, Konstantin S. Yashin, Gaukhar M. Yusubalieva, Vladimir P. Baklaushev, Artem M. Mozherov, Vladislav I. Shcheslavskiy and Marina V. Shirmanova
Cells 2025, 14(2), 97; https://doi.org/10.3390/cells14020097 - 10 Jan 2025
Viewed by 601
Abstract
Background: The wide variability in clinical responses to anti-tumor immunotherapy drives the search for personalized strategies. One of the promising approaches is drug screening using patient-derived models composed of tumor and immune cells. In this regard, the selection of an appropriate in vitro [...] Read more.
Background: The wide variability in clinical responses to anti-tumor immunotherapy drives the search for personalized strategies. One of the promising approaches is drug screening using patient-derived models composed of tumor and immune cells. In this regard, the selection of an appropriate in vitro model and the choice of cellular response assay are critical for reliable predictions. Fluorescence lifetime imaging microscopy (FLIM) is a powerful, non-destructive tool that enables direct monitoring of cellular metabolism on a label-free basis with a potential to resolve metabolic rearrangements in immune cells associated with their reactivity. Objective: The aim of the study was to develop a patient-derived glioma explant model enriched by autologous peripheral lymphocytes and explore FLIM of the redox-cofactor NAD(P)H in living lymphocytes to measure the responses of the model to immune checkpoint inhibitors. Methods: The light microscopy, FLIM of NAD(P)H and flow cytometry were used. Results: The results demonstrate that the responsive models displayed a significant increase in the free NAD(P)H fraction α1 after treatment, associated with a shift towards glycolysis due to lymphocyte activation. The non-responsive models exhibited no alterations or a decrease in the NAD(P)H α1 after treatment. The FLIM data correlated well with the standard assays of immunotherapy drug response in vitro, including morphological changes, the T-cells activation marker CD69, and the tumor cell proliferation index Ki67. Conclusions: The proposed platform that includes tumor explants co-cultured with lymphocytes and the NAD(P)H FLIM assay represents a promising solution for the patient-specific immunotherapeutic drug screening. Full article
(This article belongs to the Section Cellular Metabolism)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Characteristics of G-EXP-L model. (<b>A</b>) Phase contrast microscopy of patient-derived G-EXP-L model on days 4, 9, and 14 of cultivation. The components of the model are shown by the numerated arrows: 1—adherent tumor fragment, 2—adherent tumor cells, 3—lymphocytes. (<b>B</b>) Expression of the activation markers CD69 and CD25 in live CD4+ and CD8+ T-cells before and after co-culturing with glioma explants. Dot plots show the measurements for the individual patients (dots), and horizontal lines connect the values for the same patient before and on day 14 of co-culturing. (<b>C</b>) Expression of proliferation marker Ki67 in tumor explants before and on day 14 of co-culturing. Dot plots show the measurements for individual patients (dots), and horizontal lines connect the values for the same patient before and after co-culturing. Statistics: paired Student’s <span class="html-italic">t</span>-test. * Significant difference, <span class="html-italic">p</span> ≤ 0.05.</p>
Figure 2
<p>FLIM of NAD(P)H in lymphocytes before and after co-culturing with glioma explants. (<b>A</b>) Representative FLIM images of lymphocytes before and after co-culturing. The long lifetime component of NAD(P)H τ<sub>2</sub>, the relative contribution of free NAD(P)H α<sub>1</sub>, and the mean lifetime τ<sub>m</sub> are shown. (<b>B</b>) Coupled comparisons of fluorescence decay parameters (τ<sub>2</sub>, α<sub>1</sub>, τ<sub>m</sub>) before and after co-culturing. Dots are the measurements for individual patients. Coupled comparisons of NAD(P)H-α<sub>1</sub> before and after co-culturing. Dots are the mean values for individual patients. Lines connect the values before and after co-culturing for the same patient. Statistics: paired Student’s <span class="html-italic">t</span>-test. * Significant difference, <span class="html-italic">p</span> ≤ 0.05.</p>
Figure 3
<p>Phase contrast microscopy of patient-derived G-EXP-L model after treatment with immune checkpoint inhibitors on days 4, 9, and 14 of cultivation. Examples of the morphological response: (<b>A</b>)—“no response” (patient G27, anti-CTLA-4 treatment), (<b>B</b>)—“partial response” (patient G20, anti-PD-1 treatment), (<b>C</b>)—“response” (patient G30, combination treatment). Bars are indicated in the images.</p>
Figure 4
<p>Expression of the activation marker CD69 in CD8+ or CD4+ T-lymphocytes in the G-EXP-L models after immunotherapy with anti-CTLA-4 (<b>A</b>), anti-PD-1 (<b>B</b>), or their combination (<b>C</b>). Dot plots show measurements for individual patients (dots) and SEM (horizontal lines). The red boxes indicate the cases of a significant rise in the percentage of CD69+ cells, either in the CD8+ T-cell subset or both CD8+ and CD4+ T-cells. Statistics: Mann–Whitney U test. * Significant difference, <span class="html-italic">p</span> ≤ 0.05.</p>
Figure 5
<p>Expression of proliferative index Ki67 of glioma cells in the G-EXP-L models after immunotherapy with anti-CTLA-4 (<b>A</b>), anti-PD-1 (<b>B</b>) or their combination (<b>C</b>). Dot plots show measurements for individual patients (dots) and SEM (horizontal lines). The red boxes indicate the cases of a significant decrease in the percentage of Ki67+ glioma cells. Statistics: Mann–Whitney U test. * Significant difference, <span class="html-italic">p</span> ≤ 0.05.</p>
Figure 6
<p>FLIM of NAD(P)H in lymphocytes from the G-EXP-L models after anti-CTLA-4, anti-PD-1 or combined (anti-CTLA-4 + anti-PD-1) treatment. (<b>A</b>) Representative FLIM images of lymphocytes from responding (patient G30) and non-responding (patient G27) models. The relative amplitude of free NAD(P)H α<sub>1</sub> is shown in the untreated and treated T cells. Scale bar is indicated on the images. (<b>B</b>) Quantification of NAD(P)H α<sub>1</sub> for individual patient-derived models. The graphs display the mean and SD (horizontal lines). Dots are the measurements for individual cells. The red boxes indicate the cases of a significant rise in NAD(P)H α<sub>1</sub>. Statistics: Student’s <span class="html-italic">t</span>-test. * Significant difference, <span class="html-italic">p</span> ≤ 0.05.</p>
Scheme 1
<p>The experimental workflow.</p>
24 pages, 1906 KiB  
Article
Towards Efficient Object Detection in Large-Scale UAV Aerial Imagery via Multi-Task Classification
by Shuo Zhuang, Yongxing Hou and Di Wang
Drones 2025, 9(1), 29; https://doi.org/10.3390/drones9010029 - 5 Jan 2025
Viewed by 555
Abstract
Achieving rapid and effective object detection in large-scale unmanned aerial vehicle (UAV) images presents a challenge. Existing methods typically split the original large UAV image into overlapping patches and perform object detection on each image patch. However, the extensive object-free background areas in large-scale aerial imagery reduce detection efficiency. To address this issue, we propose an efficient object detection approach for large-scale UAV aerial imagery via multi-task classification. Specifically, we develop a lightweight multi-task classification (MTC) network to efficiently identify background areas. Our method leverages bounding box label information to construct a salient region generation branch. Then, to improve the training process of the classification network, we design a multi-task loss function to optimize the parameters of the multi-branch network. Furthermore, we introduce an optimal classification threshold strategy to balance detection speed and accuracy. Our proposed MTC network can rapidly and accurately determine whether an aerial image patch contains objects, and it can be seamlessly integrated with existing detectors without the need for retraining. We conduct experiments on three datasets to verify the effectiveness and efficiency of our classification-driven detection method, including the DOTA v1.0, DOTA v2.0, and ASDD datasets. On the large-scale UAV images and the ASDD dataset, our proposed method increases the detection speed by more than 30% and 130%, respectively, while maintaining good object detection performance. Full article
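The split-then-gate strategy described in this abstract can be sketched independently of any specific model: tile the large image into overlapping patches, let a cheap classifier reject object-free background, and run the full detector only on the surviving patches. A minimal illustrative sketch (the `classifier` and `detector` callables, the patch size, and the overlap are stand-ins, not the paper's MTC network; the image is assumed to be at least one patch wide and tall):

```python
def tile_coords(width, height, patch=1024, overlap=200):
    """Top-left (x, y) coordinates of overlapping patches covering a large image."""
    stride = patch - overlap
    xs = list(range(0, max(width - patch, 0) + 1, stride))
    ys = list(range(0, max(height - patch, 0) + 1, stride))
    # make sure the right and bottom borders are covered
    if xs[-1] + patch < width:
        xs.append(width - patch)
    if ys[-1] + patch < height:
        ys.append(height - patch)
    return [(x, y) for y in ys for x in xs]

def detect_large_image(image, classifier, detector, threshold=0.5):
    """Run the detector only on patches the classifier flags as containing objects."""
    results = []
    for x, y in tile_coords(image["w"], image["h"]):
        patch = (x, y)  # stand-in for an actual crop of the image
        if classifier(patch) >= threshold:  # skip object-free background patches
            results.extend(detector(patch))
    return results
```

With an accurate gate, detection cost scales with the fraction of object-bearing patches rather than with the full tiling, which is where speedups of the reported kind come from; the `threshold` argument mirrors the paper's optimal classification threshold strategy for trading speed against recall.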
Show Figures

Figure 1

<p>Different strategies’ object detection in large-scale UAV aerial imagery. (<b>a</b>) An original large-scale aerial image from the DOTA v2.0 dataset. (<b>b</b>) Image patches. (<b>c</b>) Four different detection strategies. (1) Detecting each image patch individually, (2) adding a classification branch to the detector to determine whether an image patch contains objects, (3) using a classifier to first determine whether the objects are present before performing detection, and (4) our proposed classification-driven detector: determining whether the objects are present before detection with multi-task optimization.</p>
Figure 2
<p>Overview of the proposed multi-task classification (MTC) network for efficient object detection in large-scale UAV aerial imagery.</p>
Figure 3
<p>The generation of Gaussian distribution-based salient regions: (<b>a</b>) example of salient region generation for a ship in an image patch from the ASDD dataset; (<b>b</b>) example of salient region generation for planes in an image patch from the DOTA v1.0 dataset.</p>
Figure 4
<p>Examples of detection results based on a binary classification model and a multi-task classification model: (<b>a</b>) detection results of different classification-based detectors in the DOTA dataset; (<b>b</b>) detection results of different classification-based detectors in the ASDD dataset. (1), (2), (3), (4), (5), and (6) represent different image examples. The red boxes in the images indicate the object bounding boxes.</p>
Figure 5
<p>Results on large-scale aerial images. (<b>a</b>) An aerial image of urban areas and corresponding detection results. (<b>b</b>) An aerial image of inshore areas and corresponding detection results.</p>
Figure 6
<p>Visualization of feature maps from two different classifiers: binary classifier and MTC network. (<b>a</b>–<b>c</b>) are from the DOTA v1.0 dataset. (<b>d</b>,<b>e</b>) are from the ASDD dataset. In the classification results, orange text represents false classification and blue text represents true classification.</p>
Figure 7
<p>The speed–accuracy trade-off curve in the ASDD dataset.</p>
24 pages, 5556 KiB  
Article
Differential Mitochondrial Redox Responses to the Inhibition of NAD+ Salvage Pathway of Triple Negative Breast Cancer Cells
by Jack Kollmar, Junmei Xu, Diego Gonzalves, Joseph A. Baur, Lin Z. Li, Julia Tchou and He N. Xu
Cancers 2025, 17(1), 7; https://doi.org/10.3390/cancers17010007 - 24 Dec 2024
Viewed by 799
Abstract
Background/Objectives: Cancer cells rely on metabolic reprogramming that is supported by altered mitochondrial redox status and an increased demand for NAD+. Overexpression of Nampt, the rate-limiting enzyme of the NAD+ biosynthesis salvage pathway, is common in breast cancer cells, and more so in triple negative breast cancer (TNBC) cells. Targeting the salvage pathway has been pursued for cancer therapy. However, TNBC cells have heterogeneous responses to Nampt inhibition, which contributes to the diverse outcomes. There is a lack of imaging biomarkers to differentiate among TNBC cells under metabolic stress and identify which are responsive. We aimed to characterize and differentiate among a panel of TNBC cell lines under NAD-deficient stress and identify which subtypes are more dependent on the NAD salvage pathway. Methods: Optical redox imaging (ORI), a label-free live-cell imaging microscopy technique, was utilized to acquire the intrinsic fluorescence intensities of NADH and FAD-containing flavoproteins (Fp), and thus the mitochondrial redox ratio Fp/(NADH + Fp), in a panel of TNBC cell lines. Various fluorescence probes were then added to the cultures to image mitochondrial ROS, mitochondrial membrane potential, mitochondrial mass, and cell number. Results: Various TNBC subtypes are sensitive to Nampt inhibition in a dose- and time-dependent manner and show differential mitochondrial redox responses; furthermore, the mitochondrial redox indices correlated linearly with the mitochondrial ROS induced by various doses of a Nampt inhibitor, and the changes in the redox indices correlated with growth inhibition. Additionally, the redox state fully recovered after removal of the Nampt inhibitor. Conclusions: This study supports the utility of ORI in rapid metabolic phenotyping of TNBC cells under NAD-deficient stress to identify responsive cells and biomarkers of treatment response, facilitating combination therapy strategies. Full article
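The mitochondrial redox ratio used throughout this article is a simple pixel-wise quantity derived from the two intrinsic fluorescence channels. A minimal sketch of how it could be computed from matched, background-subtracted intensity images (the toy 2×2 arrays are invented, not the authors' data):

```python
def redox_ratio(fp, nadh, eps=1e-12):
    """Pixel-wise optical redox ratio Fp / (NADH + Fp).

    fp, nadh: matched rows of background-subtracted fluorescence intensities
    (flavoproteins and NADH). A more oxidized state pushes the ratio toward 1,
    a more reduced state toward 0; eps guards against empty pixels.
    """
    return [[f / (n + f + eps) for f, n in zip(frow, nrow)]
            for frow, nrow in zip(fp, nadh)]

# Toy 2x2 "images" (invented numbers, not the authors' data):
fp = [[2.0, 1.0], [3.0, 1.0]]
nadh = [[2.0, 3.0], [1.0, 1.0]]
ratio = redox_ratio(fp, nadh)  # ratio[0][0] ~ 0.5: equal Fp and NADH signal
```

Because the ratio is normalized per pixel, it is robust to overall intensity differences between fields of view, which is part of why it serves as a comparable index across cell lines and doses.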
(This article belongs to the Section Methods and Technologies Development)
Show Figures

Figure 1

<p>The critical role of NAD as a co-enzyme and co-substrate (<b>A</b>) and experimental schematic (<b>B</b>), where NAM stands for nicotinamide, NMN for nicotinamide mononucleotide, TCA for tricarboxylic acid cycle, LDH for lactate dehydrogenase.</p>
Figure 2
<p>Typical images of MDA-MB-468 cells subjected to various imaging modalities. (<b>A</b>–<b>E</b>) white light image and raw images of Fp and NADH, mitochondrial membrane potential (MMP) represented by TMRE intensity, and mitochondrial mass represented by MitoView Green intensity, respectively, for control. (<b>F</b>–<b>J</b>) display the processed images of those shown in (<b>A</b>–<b>E</b>). (<b>K</b>–<b>O</b>) the processed images of MDA-MB-468 cells treated with 50 nM FK866 for 48 h. The intensity bars for the raw images are in arbitrary units and are set for better visualization of signal dynamic range. The intensity bars for the pseudocolored images represent the pixel values of the corresponding images with the redder color indicating higher values. The numbers in white color in the processed images are the means and standard deviations of the respective images. (<b>P</b>) the mean values and standard deviations of the redox indices of images shown in (<b>F</b>–<b>H</b>) (control) and (<b>K</b>–<b>M</b>) (treated); (<b>Q</b>) the mean values and standard deviations of the MMP shown in images (<b>I</b>,<b>N</b>); (<b>R</b>) the mean values and standard deviations of the mitochondrial mass of images shown in (<b>J</b>,<b>O</b>). The error bars represent the standard deviations of the pixel values in the corresponding images.</p>
Figure 3
<p>Dose-dependent mitochondrial redox responses observed in the TNBC cells. (<b>A</b>) Redox responses of MDA-MB-468 cells; (<b>B</b>) Redox responses of MDA-MB-436 cells; (<b>C</b>) Redox responses of MDA-MB-453 cells treated with 0–100 nM of FK866 for 48 h. Bars: mean ± SD, black circles indicating individual dishes. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001, ****, <span class="html-italic">p</span> &lt; 0.0001.</p>
Figure 4
<p>The redox responses represented as the percentage change from baseline for Fp, NADH, and the redox ratio in five human TNBC cell lines treated with 1 nM FK866 for 48 h. Bars: mean ± SD, black circles indicating individual dishes. Asterisks indicate the comparison between the redox indices of control (untreated) and treated cells. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001, ****, <span class="html-italic">p</span> &lt; 0.0001. Note: the data for HCC1806 were extracted from our previous report [<a href="#B35-cancers-17-00007" class="html-bibr">35</a>].</p>
Figure 5
<p>Heatmaps of the adjusted <span class="html-italic">p</span> values for multiple comparisons between cell lines of the corresponding redox index changes shown in <a href="#cancers-17-00007-f004" class="html-fig">Figure 4</a> and <a href="#cancers-17-00007-t002" class="html-table">Table 2</a>.</p>
Figure 6
<p>Nampt inhibitor dose-dependent redox changes observed in E0771 cells after 20 h treatment with (<b>A</b>) various doses of FK866 and (<b>B</b>) various doses of GMX1778. Bars: mean ± SD, black circles indicating individual dishes. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001, ****, <span class="html-italic">p</span> &lt; 0.0001.</p>
Figure 7
<p>Temporal ORI responses of TNBC cells to FK866 inhibition. (<b>A</b>) MDA-MB-231 cells and (<b>B</b>) MDA-MB-436 cells to 1 nM FK866 treatment for 24 or 48 h treatment; (<b>C</b>) HCC1806 cells were first treated with 1 nM FK866 for 48 h then allowed 24 h to recover (Rec) after removal of FK866. Bars: mean ± SD, black circles indicating individual dishes. Asterisks by themselves indicate the comparisons between the values of control (untreated) and treatment. Asterisks with brackets in (<b>B</b>) indicate the comparisons between the values of 24 h and 48 h treatment. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001, ****, <span class="html-italic">p</span> &lt; 0.0001. <span class="html-italic">p</span> values for multiple comparisons between different time points in (<b>B</b>) were adjusted by Bonferroni post hoc method.</p>
Figure 8
<p>NR rescue effects on MDA-MB-453 cells under 100 nM FK866 for 48 h. (<b>A</b>) Both NR and NAM are converted to NMN but via different enzymes. (<b>B</b>) NR (1 mM) rescue effect. Bars: mean ± SD, black circles indicating individual dishes. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001, ****, <span class="html-italic">p</span> &lt; 0.0001. Note: all redox indices were normalized to their respective control.</p>
Figure 9
<p>ORI responses of AT-3 cells to Nampt inhibition and NR rescue effect. 500 nM GMX1778, 1 mM NR, or 500 nM GMX1778, 1 mM NR treatment, or NR and NAM co-treatment for 24 h. Bars: mean ± SD, black circles indicating individual dishes. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ****, <span class="html-italic">p</span> &lt; 0.0001.</p>
Figure 10
<p>(<b>A</b>) ORI responses and changes in mitochondrial ROS of AT-3 cells treated with various doses of GMX1778 for 24 h. (<b>B</b>–<b>D</b>) The significant linear correlations between mitochondrial ROS and Fp, NADH, and the redox ratio, respectively, determined from data shown in (<b>A</b>), where R<sup>2</sup> and <span class="html-italic">p</span> values are indicated on the graphs. Bars: mean ± SD, black circles indicating individual dishes. Dashed lines: 95% confidence intervals *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001, ****, <span class="html-italic">p</span> &lt; 0.0001.</p>
Figure 11
<p>The effects of FK866 on the mitochondrial membrane potential and mitochondrial mass indicated by fluorescence probes TMRE or MitoView Green. (<b>A</b>) A large decrease (~40%) in the mitochondrial membrane potential in MDA-MB-468 cells treated with 50 nM FK866 for 48 h; (<b>B</b>) insignificant change in the mitochondrial mass of MDA-MB-468; (<b>C</b>) dose-dependent increase in the mitochondrial mass of E0771 cells treated with either 1 nM or 100 nM FK866 for 24 h. Bars: mean ± SD, black circles indicating individual FOVs. ****, <span class="html-italic">p</span> &lt; 0.0001.</p>
Figure 12
<p>The correlations between the mitochondrial redox indices and cell growth at various doses of FK866 and two treatment periods for E0771 cells. (<b>A</b>) The dose-dependent changes of the mitochondrial redox indices (note that this figure is the same as that shown in <a href="#cancers-17-00007-f006" class="html-fig">Figure 6</a>A); (<b>B</b>) The dose-dependent cell growth retardation due to 20 h exposure to various doses of FK866; (<b>C</b>–<b>E</b>) The significant linear correlations between the redox indices and cell growth under 20 h exposure to various doses of FK866, where R<sup>2</sup> and <span class="html-italic">p</span> values are indicated on the graphs. (<b>F</b>,<b>G</b>) The dose-dependent changes of the mitochondrial redox indices and suppressed cell growth due to 48 h treatment with various doses of FK866, respectively. Bars: mean ± SD, black circles indicating individual dishes. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001, ****, <span class="html-italic">p</span> &lt; 0.0001.</p>
Figure 13
<p>Effects of 1 nM FK866 on NAD<sup>+</sup> and NADH of E0771 cell homogenates after 24 h exposure. Data were obtained with two technical and two biological replicates. Bars: mean ± SD. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01.</p>
Figure A1
<p>To confirm the redox responses to metabolic modulations, we performed redox titration using MDA-MB-468 cells under normal growth conditions. We observed the expected changes, i.e., a decrease in NADH due to uncoupled oxidative phosphorylation from the electron transport chain by mitochondrial uncoupler FCCP, followed by an increase in NADH due to the inhibition of complex I and III by rotenone and antimycin A (ROT/AA), respectively. Bars: mean ± SD, N = 3–4 FOVs. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ****, <span class="html-italic">p</span> &lt; 0.0001.</p>
Figure A2
<p>Temporal redox changes of E0771 cells due to various doses of FK866 treatment. Bars: mean ± SD. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001.</p>
Figure A3
<p>The LDHA-specific inhibitor FX11 (5 µM) added to MDA-MB-231 cells exposed to 1 nM FK866 for 24 h immediately yielded a large spike of NADH and a reductive shift of the mitochondrial redox state. Bars: mean ± SD, black circles indicating individual samples. *, <span class="html-italic">p</span> &lt; 0.05, **, <span class="html-italic">p</span> &lt; 0.01, ***, <span class="html-italic">p</span> &lt; 0.001.</p>
23 pages, 4727 KiB  
Article
Self-Supervised and Zero-Shot Learning in Multi-Modal Raman Light Sheet Microscopy
by Pooja Kumari, Johann Kern and Matthias Raedle
Sensors 2024, 24(24), 8143; https://doi.org/10.3390/s24248143 - 20 Dec 2024
Viewed by 693
Abstract
Advancements in Raman light sheet microscopy have provided a powerful, non-invasive, marker-free method for imaging complex 3D biological structures, such as cell cultures and spheroids. By combining 3D tomograms made by Rayleigh scattering, Raman scattering, and fluorescence detection, this modality captures complementary spatial and molecular data, critical for biomedical research, histology, and drug discovery. Despite its capabilities, Raman light sheet microscopy faces inherent limitations, including low signal intensity, high noise levels, and restricted spatial resolution, which impede the visualization of fine subcellular structures. Traditional enhancement techniques like Fourier transform filtering and spectral unmixing require extensive preprocessing and often introduce artifacts. More recently, deep learning techniques have shown great promise in enhancing image quality, but they face their own limitations. Specifically, conventional deep learning models require large quantities of high-quality, manually labeled training data for effective denoising and super-resolution tasks, which are challenging to obtain in multi-modal microscopy. In this study, we address these limitations by exploring advanced zero-shot and self-supervised learning approaches, such as ZS-DeconvNet, Noise2Noise, Noise2Void, Deep Image Prior (DIP), and Self2Self, which enhance image quality without the need for large labeled datasets. This study offers a comparative evaluation of zero-shot and self-supervised learning methods, assessing their effectiveness in denoising, resolution enhancement, and preservation of biological structures in multi-modal Raman light sheet microscopic images. Our results demonstrate significant improvements in image clarity, offering a reliable solution for visualizing complex biological systems. These methods pave the way for future advancements in high-resolution imaging, with broad potential for enhancing biomedical research and discovery.
Full article
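The denoising results in this article are scored with PSNR, SSIM, RMSE, and FRC. The two simplest of these, RMSE and PSNR, can be computed directly; a minimal pure-Python sketch over grayscale images represented as nested lists (illustrative only; a real pipeline would typically use a library such as scikit-image):

```python
import math

def rmse(ref, img):
    """Root-mean-square error between two equally sized grayscale images."""
    n = len(ref) * len(ref[0])
    se = sum((r - x) ** 2
             for rrow, xrow in zip(ref, img)
             for r, x in zip(rrow, xrow))
    return math.sqrt(se / n)

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means less residual noise."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * math.log10(data_range / e)

# Toy 2x2 images (invented values): noisy is off by 1 everywhere, so RMSE = 1.
clean = [[0.0, 255.0], [255.0, 0.0]]
noisy = [[1.0, 254.0], [254.0, 1.0]]
```

Because PSNR is a log of the ratio between the data range and the RMSE, halving the residual noise adds about 6 dB, which is why the histograms in the figures below use a dB scale.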
Show Figures

Figure 1

<p>(<b>a</b>) Schematic of the multi-modal Raman light sheet microscope and (<b>b</b>) Raman image and spectral data acquired using a multi-modal Raman light sheet microscope with a 660 nm excitation laser and an acousto-optic tunable filter (AOTF) set at 817 nm.</p>
Figure 2
<p>(<b>a</b>) Original images acquired using multi-modal Raman light sheet microscopy. (<b>b</b>) Implementation of zero-shot and self-supervised learning algorithms (ZS-DeconvNet, Noise2Noise, Noise2Void, DIP, and Self2Self) on the original images after preprocessing. (<b>c</b>) Denoised output images evaluated using the metrics PSNR, SSIM, RMSE, and FRC.</p>
Figure 3
<p>Visual comparison of original and denoised images for all zero-shot and self-supervised learning models for the 785 nm laser and Rayleigh scattering for the untreated 14C samples (refer to <a href="#sec2dot1-sensors-24-08143" class="html-sec">Section 2.1</a>).</p>
Figure 4
<p>PSNR (<b>a</b>), SSIM (<b>b</b>), and RMSE (<b>c</b>) histograms and FRC curves (<b>d</b>) for the 14C-untreated samples and Rayleigh scattering using the 785 nm laser.</p>
Figure 5
<p>Visual comparison of original and denoised images for all zero-shot and self-supervised learning models for the 660 nm laser and fluorescence scattering for the untreated 14C samples (refer to <a href="#sec2dot1-sensors-24-08143" class="html-sec">Section 2.1</a>).</p>
Figure 6
<p>PSNR (<b>a</b>), SSIM (<b>b</b>), and RMSE (<b>c</b>) histograms and FRC curves (<b>d</b>) for the 14C-untreated samples and fluorescence scattering using the 785 nm laser.</p>
Figure 7
<p>Visual comparison of original and denoised images for all zero-shot and self-supervised learning models for the 660 nm laser and Raman scattering for the treated 11B samples (refer to <a href="#sec2dot1-sensors-24-08143" class="html-sec">Section 2.1</a>).</p>
Figure 8
<p>PSNR (<b>a</b>), SSIM (<b>b</b>), and RMSE (<b>c</b>) histograms and FRC curves (<b>d</b>) for the 11B-treated samples and Raman scattering using the 660 nm laser.</p>
Figure 9
<p>Visual comparison of original and denoised images for all zero-shot and self-supervised learning models for the 660 nm laser and Raman scattering for the untreated 11B samples (refer to <a href="#sec2dot1-sensors-24-08143" class="html-sec">Section 2.1</a>).</p>
Figure 10
<p>PSNR (<b>a</b>), SSIM (<b>b</b>), and RMSE (<b>c</b>) histograms and FRC curves (<b>d</b>) for the 11B-untreated samples and Raman scattering using the 660 nm laser.</p>
Figure 11
<p>Loss vs. epoch curves for all zero-shot and self-supervised learning algorithms, reflecting overall training performance across all modalities described in <a href="#sec3dot1-sensors-24-08143" class="html-sec">Section 3.1</a>.</p>
13 pages, 1229 KiB  
Article
Image Quality Assessment and Reliability Analysis of Artificial Intelligence-Based Tumor Classification of Stimulated Raman Histology of Tumor Biobank Samples
by Anna-Katharina Meißner, Tobias Blau, David Reinecke, Gina Fürtjes, Lili Leyer, Nina Müller, Niklas von Spreckelsen, Thomas Stehle, Abdulkader Al Shugri, Reinhard Büttner, Roland Goldbrunner, Marco Timmer and Volker Neuschmelting
Diagnostics 2024, 14(23), 2701; https://doi.org/10.3390/diagnostics14232701 - 30 Nov 2024
Viewed by 722
Abstract
Background: Stimulated Raman histology (SRH) is a label-free optical imaging method for rapid intraoperative analysis of fresh tissue samples. Analysis of SRH images using Convolutional Neural Networks (CNNs) has shown promising results for predicting the main histopathological classes of neurooncological tumors. Due to the relatively low number of rare tumor representations in CNN training datasets, reliable prediction of rarer entities remains limited. To develop new reliable analysis tools, larger datasets and greater tumor variety are crucial. One way to accomplish this is through research biobanks storing frozen tumor tissue samples. However, no data are currently available regarding the suitability of previously frozen tissue samples for SRH analysis. The aim of this study was to assess image quality and perform a comparative reliability analysis of artificial intelligence-based tumor classification using SRH in fresh and frozen tissue samples. Methods: In a monocentric prospective study, tissue samples from 25 patients undergoing brain tumor resection were obtained. SRH was acquired in fresh and defrosted samples of the same specimen after varying storage durations at −80 °C. Image quality was rated by an experienced neuropathologist, and prediction of the histopathological diagnosis was performed using two established CNNs. Results: The image quality of SRH in fresh and defrosted tissue samples was high, with a mean image quality score of 1.96 (range 1–5) for both groups. CNN analysis showed high internal consistency for histopathological (Cα 0.95) and molecular (Cα 0.83) tumor classification. The results were confirmed using a dataset with samples from the local tumor biobank (Cα 0.91 and 0.53). Conclusions: Our results showed that SRH appears comparably reliable in fresh and frozen tissue samples, enabling the integration of tumor biobank specimens to potentially improve the diagnostic range and reliability of CNN prediction tools. Full article
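Internal consistency between the fresh-scan and frozen-scan predictions is summarized here with Cronbach's alpha (Cα). A small pure-Python sketch of the statistic, using the variance-of-sums formulation (the example scores are invented, not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency between rating sets.

    items: list of k score lists, each of length n (one score per case),
    e.g. matched prediction scores from fresh vs. defrosted scans.
    alpha = k/(k-1) * (1 - sum of item variances / variance of case totals)
    """
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # identical ratings -> 1.0
```

Values near 1 mean the two scan conditions order the cases almost identically, which is the sense in which Cα 0.95 supports using frozen biobank samples interchangeably with fresh ones.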
(This article belongs to the Special Issue Artificial Intelligence in Pathological Image Analysis—2nd Edition)
Show Figures

Figure 1

<p>Workflow of the test–retest analysis of the patient dataset. A small (3-4 mm) tissue sample (sample #1) was collected during surgery and immediately processed for SRH imaging. The fresh squash preparation was scanned in the SRH microscope (scan 1 sample #1) and frozen at −80 °C afterwards. After varying time intervals, the sample was defrosted and scanned again in the SRH microscope (re-scan sample #1). All SRH images from fresh and frozen samples were assessed for image quality and occurrence of freezing artifacts by an experienced neuropathologist and analyzed by the CNNs. CNN: Convolutional Neural Network.</p>
Figure 2
<p>SRH images of fresh and thawed tissue samples. Upper row: SRH images of a meningioma (CNS WHO grade 1) ((<b>A</b>): scan 1 sample #1, fresh; (<b>B</b>) re-scan 1, sample #1, defrosted), showing typical histologic features, such as meningothelial whorls (green arrows). Middle row: SRH images of a pulmonary adenocarcinoma metastasis ((<b>C</b>) scan 1, sample #1, fresh; (<b>D</b>) re-scan 1, sample #1, defrosted), showing sheets of epithelial tumor cells. Lower row: SRH images of a glioblastoma, IDH wild type (CNS WHO grade 4) ((<b>E</b>) scan 1, sample #1, fresh; (<b>F</b>) re-scan 1, sample #1, defrosted), showing infiltration of fibrillary tumor.</p>
Figure 3
<p>Confusion matrix of the CNN-based histological entity differentiation (<b>left</b>), and diffuse adult-type glioma subclassification (<b>right</b>) in fresh and frozen tumor tissue samples from the same patient. Ca = Cronbach’s alpha.</p>
Figure 4
<p>Confusion matrix of the CNN-based histological entity differentiation (<b>left</b>), and diffuse adult-type glioma subclassification (<b>right</b>) in tumor biobank samples comparing SRH images of fresh and frozen tumor samples. Ca = Cronbach’s alpha.</p>
14 pages, 4119 KiB  
Article
Revolutionizing Epithelial Differentiability Analysis in Small Airway-on-a-Chip Models Using Label-Free Imaging and Computational Techniques
by Shiue-Luen Chen, Ren-Hao Xie, Chong-You Chen, Jia-Wei Yang, Kuan-Yu Hsieh, Xin-Yi Liu, Jia-Yi Xin, Ching-Kai Kung, Johnson H. Y. Chung and Guan-Yu Chen
Biosensors 2024, 14(12), 581; https://doi.org/10.3390/bios14120581 - 29 Nov 2024
Viewed by 1092
Abstract
Organ-on-a-chip (OOC) devices mimic human organs, which can be used for many different applications, including drug development, environmental toxicology, disease models, and physiological assessment. Image data acquisition and analysis from these chips are crucial for advancing research in the field. In this study, we propose a label-free morphology imaging platform compatible with the small airway-on-a-chip system. By integrating deep learning and image recognition techniques, we aim to analyze the differentiability of human small airway epithelial cells (HSAECs). Utilizing cell imaging on day 3 of culture, our approach accurately predicts the differentiability of HSAECs after 4 weeks of incubation. This breakthrough significantly enhances the efficiency and stability of establishing small airway-on-a-chip models. To further enhance our analysis capabilities, we have developed a customized MATLAB program capable of automatically processing ciliated cell beating images and calculating the beating frequency. This program enables continuous monitoring of ciliary beating activity. Additionally, we have introduced an automated fluorescent particle tracking system to evaluate the integrity of mucociliary clearance and validate the accuracy of our deep learning predictions. The integration of deep learning, label-free imaging, and advanced image analysis techniques represents a significant advancement in the fields of drug testing and physiological assessment. This innovative approach offers unprecedented insights into the functioning of the small airway epithelium, empowering researchers with a powerful tool to study respiratory physiology and develop targeted interventions. Full article
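The beating-frequency pipeline sketched in Figure 3 rests on a simple idea: ciliary motion modulates local light intensity, so the dominant frequency of a pixel's intensity trace is the beating frequency. The core step can be illustrated as a discrete Fourier transform peak search on one trace (the Python below is a stand-in for the authors' MATLAB program; the synthetic 8 Hz trace is invented):

```python
import math

def beating_frequency(trace, fps):
    """Dominant frequency (Hz) of a pixel intensity trace via a DFT peak search.

    trace: light-intensity samples for one ciliated region; fps: frame rate.
    The mean (DC term) is subtracted so baseline brightness does not dominate.
    """
    n = len(trace)
    mean = sum(trace) / n
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum((s - mean) * math.cos(2 * math.pi * k * t / n)
                 for t, s in enumerate(trace))
        im = sum((s - mean) * math.sin(2 * math.pi * k * t / n)
                 for t, s in enumerate(trace))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n  # bin index -> frequency in Hz

# Synthetic trace: an 8 Hz "cilium" sampled at 64 frames/s for 1 s.
trace = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
```

The frequency resolution is fps/n, so longer recordings sharpen the estimate; the high-pass filtering step in the workflow serves the same role as the mean subtraction here, only more aggressively.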
(This article belongs to the Special Issue Biosensors for Organ-on-Chip Devices)
Figure 1
<p>Establishment and analysis of the small airway epithelium differentiation model. (<b>a</b>) Schematic diagram of the small airway-on-a-chip. The chip is a two-channel microfluidic cell culture device composed of air and medium channels. (<b>b</b>) Illustration of the chip holder and cell image observation platform. (<b>c</b>) Timeline of HSAEC differentiation. (<b>d</b>) The confocal images show the distribution of ciliated cells, goblet cells, and barrier function. Scale bar = 50 μm. (<b>e</b>) 3D confocal image of a small airway-on-a-chip model. Scale bar = 50 μm. (<b>f</b>) High-content image of a high-resolution (20<span class="html-italic">×</span>) scan. Cells were stained with DAPI (blue), Ac-tubulin (yellow) and MUC5B (green). Scale bar = 500 μm and 100 μm.</p>
Figure 2
<p>Establishment and analysis of the small airway epithelium differentiation model. (<b>a</b>) The day 3 bright-field images are used to create a dataset, with the ZO-1 expression on day 33 in the same area serving as the labeling standard. Scale bar = 50 μm. (<b>b</b>) The established dataset is augmented, and 5-fold cross-validation is then used to validate the CNN model's ability to distinguish successful from failed tissue. (<b>c</b>) Prediction results of four classic CNN models. Data are shown as the mean  ±  SD, and one-way ANOVA determines statistical significance. ns: not significant; ** <span class="html-italic">p</span> &lt; 0.01. (<b>d</b>) Overall confusion matrix generated by ResNet after 50 epochs of training; the horizontal axis refers to the grade predicted by the model, and the vertical axis to the ZO-1-based ground truth. The blue brightness is proportional to the value of each cell in the matrix. (<b>e</b>) ScoreCAM highlights the extracted feature regions, blue for low attention and red for close attention. Scale bar = 50 μm.</p>
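Panel (b) describes 5-fold cross-validation of the classifier. A minimal Python sketch of that protocol, with a placeholder `train_fn` standing in for CNN training (an assumption for illustration, not the authors' code):

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k interleaved folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, labels, train_fn, k=5):
    """Hold out each fold once, train on the rest, collect per-fold accuracy."""
    folds = kfold_indices(len(samples), k)
    accuracies = []
    for i in range(k):
        held_out = set(folds[i])
        train = [(samples[j], labels[j]) for j in range(len(samples))
                 if j not in held_out]
        model = train_fn(train)  # returns a callable: sample -> predicted label
        correct = sum(model(samples[j]) == labels[j] for j in folds[i])
        accuracies.append(correct / len(folds[i]))
    return accuracies
```

Averaging the returned per-fold accuracies gives the cross-validated estimate reported per model in panel (c).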
Figure 3
<p>The flow chart of the ciliary beating frequency analysis system. (<b>a</b>) Schematic diagram of ciliary beating corresponding to changes in light intensity. (<b>b</b>) Input each frame of ciliary beating video. (<b>c</b>) Detect light intensity changes and convert to grayscale signals. (<b>d</b>) Filter the noise with a high-pass filter. (<b>e</b>) Obtain ciliary beating positions and labeling. Scale bar = 300 μm. (<b>f</b>) Record the number of light intensity changes at each position. (<b>g</b>) The output of ciliary beating area’s distribution corresponds to the beating frequency. Scale bar = 300 μm.</p>
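For a single pixel, the pipeline in (c–g) reduces to estimating the oscillation rate of an intensity trace. A minimal Python sketch (a moving-average detrend stands in for the high-pass filter, and zero-crossing counting for the full frequency analysis; the authors' actual tool is a MATLAB program):

```python
def beating_frequency(intensity, fps):
    """Estimate a beating frequency (Hz) from one pixel's intensity trace.

    A moving-average baseline is subtracted as a crude high-pass step, then
    rising zero crossings of the residual are counted per unit time.
    """
    n = len(intensity)
    window = max(1, n // 10)
    baseline = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        baseline.append(sum(intensity[lo:hi]) / (hi - lo))
    residual = [v - b for v, b in zip(intensity, baseline)]
    crossings = sum(1 for a, b in zip(residual, residual[1:]) if a < 0 <= b)
    return crossings / (n / fps)
```

Applying this per pixel and mapping the results yields the frequency distribution of the beating areas shown in (g).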
Figure 4
<p>Establishment of fluorescent particle tracking system. (<b>a</b>) Schematic diagram of mucociliary clearance observed by fluorescent particles adhering to the mucus layer. (<b>b</b>) Input the fluorescent particle video and preprocess each frame. (<b>c</b>) Take the first frame to find out the fluorescent particles position. Scale bar = 100, 50 μm. (<b>d</b>) Filter low brightness and turn grayscale. Scale bar = 50 μm. (<b>e</b>) The system can automatically locate the particle center. Scale bar = 50 μm. (<b>f</b>) Detect the position of the next frame. Scale bar = 50 μm. (<b>g</b>) The output of particle trajectories. Trace color changes over time. Scale bar = 100 μm. (<b>h</b>) Using a rose diagram represents the particles’ moving direction and moving distance.</p>
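Steps (c–g) amount to detecting particle centroids in each frame and linking them over time. A simplified Python sketch of greedy nearest-neighbour linking, plus the net-heading readout a rose diagram would use (real trackers add gap closing and conflict resolution):

```python
import math

def link_trajectories(frames, max_step=20.0):
    """Greedily link particle centroids across frames by nearest neighbour.

    `frames` is a list of per-frame centroid lists [(x, y), ...]; returns
    one trajectory (a list of positions over time) per particle seen in
    the first frame.
    """
    trajectories = [[p] for p in frames[0]]
    for detections in frames[1:]:
        unused = list(detections)
        for traj in trajectories:
            if not unused:
                break
            nearest = min(unused, key=lambda p: math.dist(p, traj[-1]))
            if math.dist(nearest, traj[-1]) <= max_step:
                traj.append(nearest)
                unused.remove(nearest)
    return trajectories

def heading_degrees(traj):
    """Net direction of motion of one trajectory, in degrees (0..360)."""
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0
```

Binning the headings and net distances of all trajectories gives the rose-diagram summary in (h).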
Scheme 1
<p>Advanced Image Analysis Methods on Small Airway-on-a-Chip Models. Overview of this study: deep learning was used to predict the differentiation of primary human small airway epithelial cells on an organ-on-a-chip (<b>Left</b>). To observe ciliary beating frequency (CBF) in real time, MATLAB (R2022a) software was developed for movie-based automated ciliated-cell labeling and CBF calculation (<b>Middle</b>). Finally, for evaluation under air pollution conditions, automated particle tracking was established to help quickly understand the movement trajectories and directionality of fluorescent particles in the small airway chip (<b>Right</b>).</p>
29 pages, 4275 KiB  
Review
Artificial Intelligence-Assisted Stimulated Raman Histology: New Frontiers in Vibrational Tissue Imaging
by Manu Krishnan Krishnan Nambudiri, V. G. Sujadevi, Prabaharan Poornachandran, C. Murali Krishna, Takahiro Kanno and Hemanth Noothalapati
Cancers 2024, 16(23), 3917; https://doi.org/10.3390/cancers16233917 - 22 Nov 2024
Viewed by 1578
Abstract
Frozen section biopsy, introduced in the early 1900s, remains the gold standard methodology for rapid histologic evaluations. Although a valuable tool, it is labor-, time-, and cost-intensive. Other challenges include visual and diagnostic variability, which may complicate interpretation and potentially compromise the quality of clinical decisions. Raman spectroscopy, with its high specificity and non-invasive nature, can be an effective tool for dependable and quick histopathology. The most promising modality in this context is stimulated Raman histology (SRH), a label-free, non-linear optical process which generates conventional H&E-like images in short time frames. SRH overcomes limitations of conventional Raman scattering by leveraging the qualities of stimulated Raman scattering (SRS), wherein energy is transferred from a high-power pump beam to a probe beam, resulting in high-energy, high-intensity scattering. SRH’s high resolution and lack of preprocessing requirements make it particularly suitable for intrasurgical histology. Combining SRH with artificial intelligence (AI) can lead to greater precision and less reliance on manual interpretation, potentially easing the burden of the overburdened global histopathology workforce. We review the recent applications and advances in SRH and how it is tapping into AI to evolve as a revolutionary tool for rapid histologic analysis. Full article
(This article belongs to the Special Issue Advanced Research in Oncology in 2024)
Figure 1
<p>Overview of SRH workflow. (<b>a</b>) The tumor specimen obtained intraoperatively is loaded onto slides and SRH imaging is performed. Stokes and pump lasers illuminate the sample and (<b>b</b>) induce molecular vibrations within the sample. The laser excitation causes energy transitions as shown in (<b>c</b>). The molecular perturbations produce coherent Raman scattered photons that are collected and pseudo-colored to generate stimulated Raman histology images, as shown in (<b>d</b>). In (<b>e</b>), the resultant images are processed using advanced AI modalities to identify regions of different pathologic features, which are heat-mapped, as in (<b>f</b>), for easy processing by pathologists.</p>
Figure 2
<p>(<b>a</b>) The representation portrays the process of gastroscopy and the collection of fresh biopsies for direct SRS imaging. (<b>b</b>) It features the properties of Femto-SRS and Pico-SRS, including pulse chirping, spectral resolution and the conversion of a single Femto-SRS image into a pair of Pico-SRS images using deep U-Net. (<b>c</b>) Multi-chemical imaging of gastric tissue including lipid, protein and collagen fibers visualized through converted Femto-SRS and SHG channels, color-coded to SRH. Scale bars: 50 µm. (adapted from Liu et al. [<a href="#B85-cancers-16-03917" class="html-bibr">85</a>]).</p>
Figure 3
<p>(<b>a</b>) Deep U-Net-based Femto-SRS imaging: the originally acquired two-channel Pico-SRS images (ground truth), the single-channel Femto-SRS raw image (input), and the U-Net-based prediction. Scale bars: 50 µm. (<b>b</b>) Intensity profiles corresponding to the dashed lines in (<b>a</b>) of the predicted and ground-truth data, showing chemical contrast in the cell nucleus regions (marked with yellow arrows in (<b>a</b>) and in grey in (<b>b</b>)). (Source: Liu et al. [<a href="#B85-cancers-16-03917" class="html-bibr">85</a>]).</p>
Figure 4
<p>(<b>a</b>) The process flow of the SRH analysis. The tissue extracted during excision surgery was analyzed using SRH and H&amp;E analysis. SRH image analysis was performed using U-Net as option 1. Option 2 performs H&amp;E staining and subsequent analysis using K-means clustering. The outputs from option 1 and option 2 were subjected to morphological operations. (<b>b</b>) Cell segmentation and identification results in a FOV, where the number of cells for each patch is mapped to visualize cell distribution within a sample (adapted from Zhang et al. [<a href="#B103-cancers-16-03917" class="html-bibr">103</a>]).</p>
Figure 5
<p>(<b>A</b>) (<b>a</b>–<b>d</b>) The SRH and CNN workflow for the automated detection of recurrent glioma. (<b>a</b>) A 1 × 1-mm SRH image is captured in about 60 s, (<b>b</b>) which is split into 300 × 300-pixel patches using a dense sliding window method. (<b>c</b>) Each patch is analyzed by a feedforward CNN. (<b>d</b>) The final softmax layer produces a categorical probability distribution across classes: recurrence, pseudo-progression/treatment effect, and nondiagnostic. (<b>e</b>) An aggregation algorithm combines patch-level prediction probabilities to yield a single probability of recurrence for each specimen or patient. Scale bars = 50 μm. (<b>B</b>) Probability heatmaps for each of the three output classes are generated using patch-level predictions obtained from a dense, overlapping sliding window algorithm. This method ensures that each pixel in the image has a corresponding probability distribution, resulting in high-resolution, smoother heatmaps. (<b>C</b>) Each heatmap is assigned to an RGB channel, producing an overlay of predictions on the entire SRH slide. An SRH image from a patient with recurrent glioblastoma is shown, where dense tumor areas (red) are highlighted alongside nondiagnostic regions such as hemorrhagic and necrotic tissue (blue) and gliotic brain tissue (green). This semantic segmentation technique enhances the interpretation of SRH images by combining CNN predictions with spatial information about recurrent tumor areas. Scale bars = 50 μm (adapted from Hollon et al. [<a href="#B91-cancers-16-03917" class="html-bibr">91</a>]).</p>
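The specimen-level aggregation in (e) can be illustrated as follows. The rule used here, discarding nondiagnostic patches and renormalizing the remaining probabilities between the two diagnostic classes, is one plausible scheme, not necessarily the exact algorithm of the cited work:

```python
def aggregate_patch_predictions(patch_probs, nondiagnostic_cutoff=0.5):
    """Aggregate per-patch softmax outputs into one specimen-level probability.

    Each entry is (p_recurrence, p_treatment_effect, p_nondiagnostic).
    Patches dominated by the nondiagnostic class are discarded; the rest
    are renormalized between the two diagnostic classes and averaged.
    """
    diagnostic = [(r, t) for r, t, nd in patch_probs
                  if nd < nondiagnostic_cutoff]
    if not diagnostic:
        return None  # whole specimen judged nondiagnostic
    return sum(r / (r + t) for r, t in diagnostic) / len(diagnostic)
```

The same patch-level probabilities, rendered per class and assigned to RGB channels, produce the heatmap overlays of panels (B) and (C).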
12 pages, 5537 KiB  
Article
Accompanying Hemoglobin Polymerization in Red Blood Cells in Patients with Sickle Cell Disease Using Fluorescence Lifetime Imaging
by Fernanda Aparecida Borges da Silva, João Batista Florindo, Amilcar Castro de Mattos, Fernando Ferreira Costa, Irene Lorand-Metze and Konradin Metze
Int. J. Mol. Sci. 2024, 25(22), 12290; https://doi.org/10.3390/ijms252212290 - 15 Nov 2024
Viewed by 995
Abstract
In recent studies, it has been shown that fluorescence lifetime imaging (FLIM) may reveal intracellular structural details in unstained cytological preparations that are not revealed by standard staining procedures. The aim of our investigation was to examine whether FLIM images could reveal areas suggestive of polymerization in red blood cells (RBCs) of sickle cell disease (SCD) patients. We examined label-free blood films using auto-fluorescence FLIM images of 45 SCD patients and compared the results with those of 27 control persons without hematological disease. All control RBCs revealed homogeneous cytoplasm without any foci. Rounded non-sickled RBCs in SCD showed between zero and three small intensively fluorescent dots with higher lifetime values. In sickled RBCs, we additionally found larger, irregularly shaped, intensively fluorescent areas with increased FLIM values. These areas were interpreted as equivalent to polymerized hemoglobin. The rounded, non-sickled RBCs of SCD patients with homogeneous cytoplasm were not different from the erythrocytes of control patients in light microscopy. Yet, variables from the local binary pattern-transformed matrix of the FLIM values per pixel showed significant differences between non-sickled RBCs and those of control cells. In a linear discriminant analysis, using local binary pattern-transformed texture features (mean and entropy) of the erythrocyte cytoplasm of normal-appearing cells, the final model could distinguish between SCD patients and control persons, correctly classifying 84.7% of the patients. When the classification was based on the examination of a single rounded erythrocyte, an accuracy of 68.5% was achieved. When the Linear Discriminant Analysis classifier was employed for machine learning, the accuracy was 68.1%.
We believe that our study shows that FLIM is able to disclose the topography of the intracellular polymerization process of hemoglobin in sickle cell disease and that the images are compatible with the theory of the two-step nucleation. Furthermore, we think that the presented technique may be an interesting tool for the investigation of therapeutic inhibition of polymerization. Full article
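The texture features named in the abstract, the mean and entropy of the local binary pattern (LBP)-transformed matrix, can be computed as in this sketch, which applies the basic 8-neighbour LBP to a 2-D grid of per-pixel values. It illustrates the two features only, not the authors' full FLIM pipeline:

```python
import math

def lbp_codes(img):
    """8-neighbour local binary pattern codes for interior pixels of a 2-D grid."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def lbp_mean_entropy(img):
    """The two texture features: mean LBP code and Shannon entropy of its histogram."""
    codes = lbp_codes(img)
    mean = sum(codes) / len(codes)
    probs = [codes.count(c) / len(codes) for c in set(codes)]
    entropy = -sum(p * math.log2(p) for p in probs)
    return mean, entropy
```

These two numbers per cell are the coordinates plotted in Figure 6 and the inputs to the linear discriminant analysis.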
Figure 1
<p>Image of a peripheral blood film of a control case with several normal RBCs. Upper left: auto-fluorescence picture. Upper middle: fluorescence lifetime image: the blue color corresponds to the lifetime of hemoglobin. Surrounding plasma in green/yellow color corresponding to a higher lifetime. A cursor is placed on a RBC (right inferior corner). Upper right: histogram of the lifetime distribution of the image (pseudo-colors according to the rainbow spectrum). Blue represents the shortest lifetime and red is the longest. The histogram shows that hemoglobin has a short lifetime. Below is the fluorescence lifetime decay curve of the selected spot in the image. Every dot represents a single photon.</p>
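The decay curve in the lower panel is summarized by a lifetime τ. Assuming a mono-exponential decay with negligible background (an assumption made here for illustration), τ can be estimated from a log-linear least-squares fit to the histogrammed photon counts:

```python
import math

def lifetime_from_histogram(bin_centers_ns, counts):
    """Estimate the fluorescence lifetime tau (ns) from a decay histogram.

    Assumes I(t) = A * exp(-t / tau) with negligible background, so a
    least-squares line through (t, ln counts) has slope -1/tau.
    Bins with zero counts are skipped.
    """
    pts = [(t, math.log(c)) for t, c in zip(bin_centers_ns, counts) if c > 0]
    n = len(pts)
    sx = sum(t for t, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(t * t for t, _ in pts)
    sxy = sum(t * y for t, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope
```

Computing τ per pixel in this way yields the pseudo-colored lifetime maps shown in the FLIM images.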
Figure 2
<p>Upper left: autofluorescence and FLIM images of a patient with homozygous SS hemoglobinopathy. Two normal-shaped and one sickled RBC. Each of the normal-looking ones shows one highly fluorescent dot. The sickled RBC has areas with a higher fluorescence suggestive of polymerization. The histogram on the right side represents the lifetime of the region where the cursor is placed. Lower right is the decay curve of the selected region of interest.</p>
Figure 3
<p>RBCs from a blood smear of a patient with SC hemoglobinopathy. Left: auto-fluorescence image; right: FLIM image. In the center, a sickled cell with an irregular heterogeneous area of enhanced fluorescence revealing a higher lifetime value in the FLIM compared to the surrounding cytoplasm. Some of the non-sickled RBCs show highly fluorescent dots.</p>
Figure 4
<p>RBCs from a blood smear of S beta-thalassemia hemoglobinopathy. Three entire sickled cells with irregular, sometimes heterogeneous areas with enhanced fluorescence revealing higher lifetime values compared to the surrounding cytoplasm. Some of the non-sickled RBCs show highly fluorescent dots.</p>
Figure 5
<p>Distribution of non-sickled cells in patients with SCD according to the sub-types: SS in black, SC in green, and S thalassemia in orange. There were no significant differences among the different sub-types of SCD.</p>
Figure 6
<p>Dot plot of LBP mean (Y axis) versus LBP entropy (X axis) for each cell in the control group (blue) and in SCD: non-sickled cells are in green and sickled cells are in orange. Both parameters were able to discriminate between normal and SCD in 84.7% of the cases.</p>
19 pages, 8697 KiB  
Review
In Situ and Label-Free Quantification of Membrane Protein–Ligand Interactions Using Optical Imaging Techniques: A Review
by Caixin Huang, Jingbo Zhang, Zhaoyang Liu, Jiying Xu, Ying Zhao and Pengfei Zhang
Biosensors 2024, 14(11), 537; https://doi.org/10.3390/bios14110537 - 6 Nov 2024
Viewed by 1177
Abstract
Membrane proteins are crucial for various cellular processes and are key targets in pharmacological research. Their interactions with ligands are essential for elucidating cellular mechanisms and advancing drug development. To study these interactions without altering their functional properties in native environments, several advanced optical imaging methods have been developed for in situ and label-free quantification. This review focuses on recent optical imaging techniques such as surface plasmon resonance imaging (SPRi), surface plasmon resonance microscopy (SPRM), edge tracking approaches, and surface light scattering microscopy (SLSM). We explore the operational principles, recent advancements, and the scope of application of these methods. Additionally, we address the current challenges and explore the future potential of these innovative optical imaging strategies in deepening our understanding of biomolecular interactions and facilitating the discovery of new therapeutic agents. Full article
(This article belongs to the Special Issue Feature Paper in Biosensor and Bioelectronic Devices 2024)
Figure 1
<p>Schematic diagram of SPRi for in situ analysis of membrane protein–ligand interactions, reprinted with permission from Ref. [<a href="#B26-biosensors-14-00537" class="html-bibr">26</a>].</p>
Figure 2
<p>Schematic diagram of SPRM. (<b>a</b>) Schematic illustration of the experimental set-up. (<b>b</b>) The entire cell bottom membrane and part of the cell top membrane in the cell edge regions are located within the typical detection depth of the SPRM. From the bottom up, examples of bright-field, fluorescence and SPR images, respectively, reprinted with permission from Ref. [<a href="#B30-biosensors-14-00537" class="html-bibr">30</a>].</p>
Figure 3
<p>Schematic diagram of PEIM. (<b>a</b>) Schematic of the experimental setup. (<b>b</b>) Examples of optical, SPR and EIM images of 200 nm silica nanoparticles, demonstrating the spatial resolution of the systems, reprinted with permission from Ref. [<a href="#B28-biosensors-14-00537" class="html-bibr">28</a>].</p>
Figure 4
<p>Schematic of edge tracking approach. (<b>A</b>) Schematic illustration of the experimental setup based on an inverted phase-contrast microscope with a 40× phase objective. (<b>B</b>) Differential optical detection for accurate tracking of cell edge changes induced by analyte–receptor interaction. (<b>C</b>) Schematic of a typical binding curve as determined from the cell edge movement. (<b>D</b>) The root mean square of the fixed cell edge change is 0.46 nm. (<b>E</b>) Illustration of cell edge changes over time during the binding process where i, ii, and iii correspond to the stages marked in (<b>C</b>). Blue and red rectangles in (<b>B</b>,<b>E</b>) are the ROIs for differential detection, reprinted with permission from Ref. [<a href="#B34-biosensors-14-00537" class="html-bibr">34</a>].</p>
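The differential detection in (B) compares the mean intensities of two ROIs straddling the cell edge. A generic sketch of such a normalized differential signal (not the calibrated nanometre-scale conversion used in the cited work):

```python
def differential_edge_signal(roi_a, roi_b):
    """Normalized differential intensity of two ROIs straddling the cell edge.

    As the edge moves, intensity shifts from one rectangle to the other;
    the normalized difference tracks that shift to first order.
    """
    mean_a = sum(roi_a) / len(roi_a)
    mean_b = sum(roi_b) / len(roi_b)
    return (mean_a - mean_b) / (mean_a + mean_b)
```

Normalizing by the total intensity makes the readout robust to illumination fluctuations, which is what allows sub-nanometre edge displacements, as in (D), to be resolved.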
Figure 5
<p>Comparison of SPRM with PSM. (<b>A</b>) Simplified sketch of the optical setup for SPRM, and SPRM image of one 100 nm polystyrene nanoparticle. (<b>B</b>–<b>D</b>) Bright field and SPRM images of fixed A431, HeLa, and RBL-2H3 cells. (<b>E</b>) Simplified sketch of the optical setup for PSM, and PSM image of one 100 nm PSNP. (<b>F</b>–<b>H</b>) Bright field and PSM images of fixed A431, HeLa, and RBL-2H3 cells, reprinted with permission from Ref. [<a href="#B45-biosensors-14-00537" class="html-bibr">45</a>].</p>
Figure 6
<p>In situ analysis of EGFR–antibody interactions. (<b>a</b>) A typical SPRi image of a few tens of A431 cells adhered on the gold-coated glass slide. (<b>b</b>) Differential SPR image shows the maximum SPR intensity increase due to anti-EGFR binding to the surface of A431 cells. (<b>c</b>) The average SPR sensorgrams of all cells in view (black curves, average SPR sensorgram; red curve, curve fitting; gray background, cell-to-cell variation) and the surrounding regions without cell coverage (blue curve). (<b>d</b>) The SPR sensorgrams of five individual cells of different regions (gray dotted curves, individual SPR sensorgram; colored curve, corresponding fitting curves of colored circles marked area in (<b>a</b>)), reprinted with permission from Ref. [<a href="#B26-biosensors-14-00537" class="html-bibr">26</a>].</p>
Figure 7
<p>Single-cell analysis with SPRM. (<b>a</b>,<b>b</b>) The bright-field and SPRM images of a SH-EP1 cell. (<b>c</b>) The epifluorescence image of the same cell stained with Alexa Fluor 555-labeled WGA with a focus on the bottom cell membrane portion (white arrows indicate the borderline between the thick cell body (in the centre) and the thin cell membrane (at the edge)). (<b>d</b>) SPR sensorgrams of the entire cell region (black curve), cell edge region (red curve) and cell central region (blue curve) during the binding and dissociation of WGA. (<b>e</b>) SPR sensorgrams of the cell edge region (black curves) and global fitting (red curves) with WGA solutions of different concentrations. (<b>f</b>) <span class="html-italic">K<sub>D</sub></span> was determined as 0.32 mM by plotting the concentration-dependent equilibrium responses, reprinted with permission from Ref. [<a href="#B30-biosensors-14-00537" class="html-bibr">30</a>].</p>
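The K_D determination in (f) amounts to fitting the 1:1 Langmuir isotherm R_eq = R_max * C / (C + K_D) to the concentration-dependent equilibrium responses. A minimal grid-search fit on synthetic data (the search range spanning the measured concentrations is a simplification; the numbers in the test are illustrative, not the paper's measurements):

```python
def fit_kd(concentrations, responses, n_grid=200):
    """Fit R_eq = R_max * C / (C + K_D) by log-spaced grid search over K_D.

    For each trial K_D the optimal R_max is a one-parameter linear
    least-squares solution; the (K_D, R_max) pair minimizing the residual
    sum of squares is returned. Concentrations must be positive.
    """
    lo, hi = min(concentrations), max(concentrations)
    best = None
    for i in range(n_grid + 1):
        kd = lo * (hi / lo) ** (i / n_grid)
        f = [c / (c + kd) for c in concentrations]
        rmax = (sum(r * fi for r, fi in zip(responses, f)) /
                sum(fi * fi for fi in f))
        sse = sum((r - rmax * fi) ** 2 for r, fi in zip(responses, f))
        if best is None or sse < best[0]:
            best = (sse, kd, rmax)
    return best[1], best[2]
```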
Figure 8
<p>In situ analysis with edge tracking approach. (<b>A</b>) different concentrations of WGA, (<b>B</b>) different concentrations of RCA, and (<b>C</b>) different lectins (WGA, RCA, PHA, PSA, ConA). (<b>D</b>) Statistical results of association rate constant, (<b>E</b>) dissociation rate constant, and (<b>F</b>) dissociation constant for three lectins (WGA, RCA, PHA) with obvious binding interaction with red blood cells, reprinted with permission from Ref. [<a href="#B41-biosensors-14-00537" class="html-bibr">41</a>].</p>
Figure 9
<p>In situ analysis of small molecules interacting with membrane proteins with edge tracking approaches. (<b>a</b>) WGA binding to glycoprotein on SH-EP1 cells. (<b>b</b>) Acetylcholine binding to nicotinic acetylcholine receptors (ion channel) on SH-EP1-α4β2 cells. (<b>c</b>) AMD3100 binding to CXCR-4 receptors (GPCR) on A549 cells. (<b>d</b>) Insulin binding to insulin receptors (tyrosine kinase receptor) on Hep G2 cells, reprinted with permission from Ref. [<a href="#B39-biosensors-14-00537" class="html-bibr">39</a>].</p>
Figure 10
<p>In situ analysis of small molecules interacting with membrane proteins with ESM. (<b>a</b>,<b>d</b>) Bright field, ESM images, and spring constant map of the A431 cell interacting with 300 nM (<b>a</b>), and 900 nM (<b>d</b>) erlotinib. (<b>b</b>,<b>e</b>) Image intensity variation against time during the association and dissociation phases for the A431 cell shown in (<b>a</b>,<b>d</b>). The association phase was achieved during flowing the erlotinib solution, and the dissociation phase was achieved during flowing the live cell imaging solution. (<b>c</b>,<b>f</b>) Spring constant variation against time during the association and dissociation phases for the A431 cell shown in (<b>a</b>,<b>d</b>), reprinted with permission from Ref. [<a href="#B43-biosensors-14-00537" class="html-bibr">43</a>].</p>
Figure 11
<p>High throughput and in situ analysis of lectin interacting with membrane proteins with ESM. (<b>A</b>) Bright field and ESM image of live A431 cells. (<b>B</b>) The image intensity variation against time achieved by averaging the signal of all cells within the field of view. (<b>C</b>) Zoomed views of marked region at 0 s and 216 s after changing the flow to WGA solution, and the differential image. (<b>D</b>) The image intensity variation against time achieved from the cell in the marked zone in (<b>A</b>). (<b>E</b>–<b>H</b>) Statistical distributions of association rate constant, dissociation rate constant, dissociation constant and maximum response value in the binding curves achieved from the individual cells, reprinted with permission from Ref. [<a href="#B44-biosensors-14-00537" class="html-bibr">44</a>].</p>
Figure 12
<p>High throughput and in situ analysis of small-molecule ligands interacting with membrane proteins with ESM. (<b>A</b>) Bright field image and spring constant map of live A431 cells. (<b>B</b>) The spring constant variation against time achieved by averaging the signal of all cells within the field of view. (<b>C</b>) Zoomed views of marked region at 0 s and 220 s after changing the flow to 1 μM erlotinib, and the differential image. (<b>D</b>) The spring constants variation against time achieved from the cell in marked zone in (<b>A</b>). (<b>E</b>–<b>H</b>) Statistical distributions of the association rate constant, dissociation rate constant, dissociation constant and maximum response value in the binding curves, reprinted with permission from Ref. [<a href="#B44-biosensors-14-00537" class="html-bibr">44</a>].</p>
16 pages, 5991 KiB  
Article
Advanced Imaging Integration: Multi-Modal Raman Light Sheet Microscopy Combined with Zero-Shot Learning for Denoising and Super-Resolution
by Pooja Kumari, Shaun Keck, Emma Sohn, Johann Kern and Matthias Raedle
Sensors 2024, 24(21), 7083; https://doi.org/10.3390/s24217083 - 3 Nov 2024
Cited by 2 | Viewed by 1772
Abstract
This study presents an advanced integration of Multi-modal Raman Light Sheet Microscopy with zero-shot learning-based computational methods to significantly enhance the resolution and analysis of complex three-dimensional biological structures, such as 3D cell cultures and spheroids. The Multi-modal Raman Light Sheet Microscopy system incorporates Rayleigh scattering, Raman scattering, and fluorescence detection, enabling comprehensive, marker-free imaging of cellular architecture. These diverse modalities offer detailed spatial and molecular insights into cellular organization and interactions, critical for applications in biomedical research, drug discovery, and histological studies. To improve image quality without altering or introducing new biological information, we apply Zero-Shot Deconvolution Networks (ZS-DeconvNet), a deep-learning-based method that enhances resolution in an unsupervised manner. ZS-DeconvNet significantly refines image clarity and sharpness across multiple microscopy modalities without requiring large, labeled datasets, or introducing artifacts. By combining the strengths of multi-modal light sheet microscopy and ZS-DeconvNet, we achieve improved visualization of subcellular structures, offering clearer and more detailed representations of existing data. This approach holds significant potential for advancing high-resolution imaging in biomedical research and other related fields. Full article
Figure 1
<p>Principle of light sheet microscopy. Excitation and collection axes are orthogonally oriented with the sample placed at their intersection. A laser beam is shaped into a sheet and illuminates a thin section of the sample in the focal plane of the detection objective. The objective images the plane onto a camera chip [<a href="#B2-sensors-24-07083" class="html-bibr">2</a>].</p>
Figure 2
<p>(<b>a</b>) Isometric view of the Raman light sheet microscope CAD model with connected sCMOS camera. The colored lines indicate the optical path of the illuminating lasers. Red: 660 nm beam propagation. Green: 785 nm beam propagation. Blue: coaxially superimposed 660 nm and 785 nm beam propagation [<a href="#B2-sensors-24-07083" class="html-bibr">2</a>]. (<b>b</b>) Sample chamber (24), where the sample is placed.</p>
Figure 3
<p>(<b>a</b>) CAD model of a multi-view sample carrier and corresponding frame for embedding spheroid samples in hydrogels. This sample is located in the sample chamber shown in <a href="#sensors-24-07083-f002" class="html-fig">Figure 2</a>b. (<b>b</b>) Sample holder system consisting of gel chamber with cylindrical extension, casting frame and negative mold for precise, reproducible embedding of spheroids in hydrogel.</p>
Figure 4
<p>(<b>a</b>) The Zero-Shot Deconvolution Network (ZS-DeconvNet) architecture outlines the training workflow, encompassing pre-processing steps—such as corrupted image generation and median filter-based denoising—as well as post-processing techniques, including region-of-interest (ROI) image enhancement and morphological operations. The network’s performance is assessed using PSNR, SSIM, and RMSE metrics to achieve enhanced image quality in Raman light sheet microscopy. (<b>b</b>,<b>c</b>) represent the input (<b>b</b>) and output (<b>c</b>) of the ZS-DeconvNet architecture, as depicted in (<b>a</b>).</p>
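The evaluation metrics named in (a) can be computed as follows for two images flattened to equal-length pixel sequences. The SSIM here is the single-window (global) simplification of the usual locally windowed metric:

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length pixel sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

def ssim_global(a, b, peak=255.0):
    """Single-window (global) SSIM; the standard metric averages this
    statistic over local sliding windows."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((y - mu_b) ** 2 for y in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

Higher PSNR and SSIM (and lower RMSE) between the denoised output and a reference indicate better reconstruction quality.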
Figure 5
<p>Comparison of original 11B-Untreated (<b>a</b>) and 11B-Treated Cells (<b>b</b>) images and denoised images after ZS-DeconvNet obtained using laser excitation at 660 nm and AOTF at 650 nm (Rayleigh scattering).</p>
Figure 6
<p>Comparison of original 11B-Untreated Cells (<b>a</b>) and 11B-Treated Cells (<b>b</b>) images and denoised images after ZS-DeconvNet obtained using laser excitation at 660 nm and AOTF at 817 nm (Raman scattering).</p>
Figure 7
<p>Comparison of original 11B-Untreated Cells (<b>a</b>) and 11B-Treated Cells (<b>b</b>) images and denoised images after ZS-DeconvNet obtained using laser excitation at 660 nm and AOTF at 694 nm (fluorescence).</p>
Figure 8
<p>Comparison of original 11B-Untreated Cells (<b>a</b>) and 11B-Treated Cells (<b>b</b>) images and denoised images after ZS-DeconvNet obtained using laser excitation at 785 nm and AOTF at 775 nm (Rayleigh scattering).</p>
Figure 9
<p>Denoising performance of ZS-DeconvNet on 11B-Untreated cells images using 660 nm laser for Rayleigh spectra: original image, denoised image after ZS-DeconvNet, FRC analysis and denoising metrics (PSNR, SSIM and RMSE).</p>