Search Results (463)

Search Parameters:
Keywords = label-free imaging

12 pages, 5537 KiB  
Article
Accompanying Hemoglobin Polymerization in Red Blood Cells in Patients with Sickle Cell Disease Using Fluorescence Lifetime Imaging
by Fernanda Aparecida Borges da Silva, João Batista Florindo, Amilcar Castro de Mattos, Fernando Ferreira Costa, Irene Lorand-Metze and Konradin Metze
Int. J. Mol. Sci. 2024, 25(22), 12290; https://doi.org/10.3390/ijms252212290 - 15 Nov 2024
Viewed by 457
Abstract
In recent studies, it has been shown that fluorescence lifetime imaging (FLIM) may reveal intracellular structural details in unstained cytological preparations that are not revealed by standard staining procedures. The aim of our investigation was to examine whether FLIM images could reveal areas suggestive of polymerization in red blood cells (RBCs) of sickle cell disease (SCD) patients. We examined label-free blood films using auto-fluorescence FLIM images of 45 SCD patients and compared the results with those of 27 control persons without hematological disease. All control RBCs revealed homogeneous cytoplasm without any foci. Rounded non-sickled RBCs in SCD showed between zero and three small, intensively fluorescent dots with higher lifetime values. In sickled RBCs, we additionally found larger, irregularly shaped, intensively fluorescent areas with increased FLIM values. These areas were interpreted as equivalent to polymerized hemoglobin. In light microscopy, the rounded, non-sickled RBCs of SCD patients with homogeneous cytoplasm were not different from the erythrocytes of control patients. Yet, variables from the local binary pattern-transformed matrix of the FLIM values per pixel showed significant differences between non-sickled RBCs and control cells. In a linear discriminant analysis using local binary pattern-transformed texture features (mean and entropy) of the erythrocyte cytoplasm of normal-appearing cells, the final model could distinguish between SCD patients and control persons with an accuracy of 84.7%. When the classification was based on the examination of a single rounded erythrocyte, an accuracy of 68.5% was achieved. Employing the linear discriminant analysis classifier method for machine learning, the accuracy was 68.1%. We believe that our study shows that FLIM is able to disclose the topography of the intracellular polymerization process of hemoglobin in sickle cell disease and that the images are compatible with the two-step nucleation theory. Furthermore, we think that the presented technique may be an interesting tool for investigating the therapeutic inhibition of polymerization.
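As a rough illustration of the texture pipeline summarized in this abstract (local binary pattern features computed from per-pixel FLIM lifetime values, followed by a linear discriminant classifier), here is a minimal Python sketch. It assumes each cell has already been segmented into a 2D lifetime array; the LBP radius, neighbor count, and cross-validation scheme are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def lbp_mean_entropy(lifetime_map, n_points=8, radius=1):
    """LBP mean and entropy of a per-cell FLIM lifetime map (illustrative settings)."""
    lbp = local_binary_pattern(lifetime_map, n_points, radius, method="uniform")
    counts, _ = np.histogram(lbp, bins=np.arange(n_points + 3))  # uniform codes: 0..n_points+1
    p = counts[counts > 0] / counts.sum()
    return lbp.mean(), -np.sum(p * np.log2(p))

def classify_cells(lifetime_maps, labels):
    """Cross-validated accuracy of an LDA classifier on LBP mean/entropy features."""
    X = np.array([lbp_mean_entropy(m) for m in lifetime_maps])
    y = np.array(labels)  # e.g., 1 = cell from an SCD patient, 0 = control cell
    return cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```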
Figure 1. Image of a peripheral blood film of a control case with several normal RBCs. Upper left: auto-fluorescence picture. Upper middle: fluorescence lifetime image; the blue color corresponds to the lifetime of hemoglobin, and the surrounding plasma in green/yellow corresponds to a higher lifetime. A cursor is placed on an RBC (right inferior corner). Upper right: histogram of the lifetime distribution of the image (pseudo-colors according to the rainbow spectrum); blue represents the shortest lifetime and red the longest. The histogram shows that hemoglobin has a short lifetime. Below is the fluorescence lifetime decay curve of the selected spot in the image. Every dot represents a single photon.
Figure 2. Upper left: autofluorescence and FLIM images of a patient with homozygous SS hemoglobinopathy. Two normal-shaped and one sickled RBC. Each of the normal-looking cells shows one highly fluorescent dot. The sickled RBC has areas with higher fluorescence suggestive of polymerization. The histogram on the right side represents the lifetime of the region where the cursor is placed. Lower right is the decay curve of the selected region of interest.
Figure 3. RBCs from a blood smear of a patient with SC hemoglobinopathy. Left: auto-fluorescence image; right: FLIM image. In the center, a sickled cell with an irregular heterogeneous area of enhanced fluorescence revealing a higher lifetime value in the FLIM image compared to the surrounding cytoplasm. Some of the non-sickled RBCs show highly fluorescent dots.
Figure 4. RBCs from a blood smear of S beta-thalassemia hemoglobinopathy. Three entire sickled cells with irregular, sometimes heterogeneous areas of enhanced fluorescence revealing higher lifetime values compared to the surrounding cytoplasm. Some of the non-sickled RBCs show highly fluorescent dots.
Figure 5. Distribution of non-sickled cells in patients with SCD according to the sub-types: SS in black, SC in green, and S thalassemia in orange. There were no significant differences among the different sub-types of SCD.
Figure 6. Dot plot of LBP mean (Y axis) versus LBP entropy (X axis) showing the distribution of each cell in the control group (blue) and in SCD: non-sickled cells in green and sickled cells in orange. Both parameters together were able to discriminate between normal and SCD in 84.7% of the cases.
19 pages, 8697 KiB  
Review
In Situ and Label-Free Quantification of Membrane Protein–Ligand Interactions Using Optical Imaging Techniques: A Review
by Caixin Huang, Jingbo Zhang, Zhaoyang Liu, Jiying Xu, Ying Zhao and Pengfei Zhang
Biosensors 2024, 14(11), 537; https://doi.org/10.3390/bios14110537 - 6 Nov 2024
Viewed by 553
Abstract
Membrane proteins are crucial for various cellular processes and are key targets in pharmacological research. Their interactions with ligands are essential for elucidating cellular mechanisms and advancing drug development. To study these interactions without altering their functional properties in native environments, several advanced optical imaging methods have been developed for in situ and label-free quantification. This review focuses on recent optical imaging techniques such as surface plasmon resonance imaging (SPRi), surface plasmon resonance microscopy (SPRM), edge tracking approaches, and surface light scattering microscopy (SLSM). We explore the operational principles, recent advancements, and the scope of application of these methods. Additionally, we address the current challenges and explore the future potential of these innovative optical imaging strategies in deepening our understanding of biomolecular interactions and facilitating the discovery of new therapeutic agents.
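Several figures in this review report dissociation constants obtained by fitting concentration-dependent equilibrium responses (for example, Figure 7f below). A minimal sketch of such a 1:1 Langmuir isotherm fit is shown here; the response values and initial guesses are made-up illustrative numbers, not data from the cited work.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, r_max, k_d):
    """1:1 binding isotherm: equilibrium response as a function of ligand concentration."""
    return r_max * conc / (k_d + conc)

# Hypothetical equilibrium responses at several ligand concentrations (mM, arbitrary response units)
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
r_eq = np.array([11.0, 20.0, 33.0, 50.0, 65.0, 77.0])

popt, _ = curve_fit(langmuir, conc, r_eq, p0=[100.0, 0.3])
print(f"R_max = {popt[0]:.1f}, K_D = {popt[1]:.2f} mM")
```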
(This article belongs to the Special Issue Feature Paper in Biosensor and Bioelectronic Devices 2024)
Figure 1. Schematic diagram of SPRi for in situ analysis of membrane protein–ligand interactions. Reprinted with permission from Ref. [26].
Figure 2. Schematic diagram of SPRM. (a) Schematic illustration of the experimental set-up. (b) The entire cell bottom membrane and part of the cell top membrane in the cell edge regions are located within the typical detection depth of the SPRM. From the bottom up: examples of bright-field, fluorescence and SPR images, respectively. Reprinted with permission from Ref. [30].
Figure 3. Schematic diagram of PEIM. (a) Schematic of the experimental setup. (b) Examples of optical, SPR and EIM images of 200 nm silica nanoparticles, demonstrating the spatial resolution of the systems. Reprinted with permission from Ref. [28].
Figure 4. Schematic of the edge tracking approach. (A) Schematic illustration of the experimental setup based on an inverted phase-contrast microscope with a 40× phase objective. (B) Differential optical detection for accurate tracking of cell edge changes induced by analyte–receptor interaction. (C) Schematic of a typical binding curve as determined from the cell edge movement. (D) The root mean square of the fixed cell edge change is 0.46 nm. (E) Illustration of cell edge changes over time during the binding process, where i, ii, and iii correspond to the stages marked in (C). Blue and red rectangles in (B,E) are the ROIs for differential detection. Reprinted with permission from Ref. [34].
Figure 5. Comparison of SPRM with PSM. (A) Simplified sketch of the optical setup for SPRM, and SPRM image of one 100 nm polystyrene nanoparticle. (B–D) Bright-field and SPRM images of fixed A431, HeLa, and RBL-2H3 cells. (E) Simplified sketch of the optical setup for PSM, and PSM image of one 100 nm PSNP. (F–H) Bright-field and PSM images of fixed A431, HeLa, and RBL-2H3 cells. Reprinted with permission from Ref. [45].
Figure 6. In situ analysis of EGFR–antibody interactions. (a) A typical SPRi image of a few tens of A431 cells adhered on the gold-coated glass slide. (b) Differential SPR image showing the maximum SPR intensity increase due to anti-EGFR binding to the surface of A431 cells. (c) The average SPR sensorgrams of all cells in view (black curves, average SPR sensorgram; red curve, curve fitting; gray background, cell-to-cell variation) and of the surrounding regions without cell coverage (blue curve). (d) The SPR sensorgrams of five individual cells from different regions (gray dotted curves, individual SPR sensorgrams; colored curves, corresponding fitting curves for the areas marked with colored circles in (a)). Reprinted with permission from Ref. [26].
Figure 7. Single-cell analysis with SPRM. (a,b) The bright-field and SPRM images of an SH-EP1 cell. (c) The epifluorescence image of the same cell stained with Alexa Fluor 555-labeled WGA with a focus on the bottom cell membrane portion (white arrows indicate the borderline between the thick cell body, in the centre, and the thin cell membrane, at the edge). (d) SPR sensorgrams of the entire cell region (black curve), cell edge region (red curve) and cell central region (blue curve) during the binding and dissociation of WGA. (e) SPR sensorgrams of the cell edge region (black curves) and global fitting (red curves) with WGA solutions of different concentrations. (f) K_D was determined as 0.32 mM by plotting the concentration-dependent equilibrium responses. Reprinted with permission from Ref. [30].
Figure 8. In situ analysis with the edge tracking approach. (A) Different concentrations of WGA, (B) different concentrations of RCA, and (C) different lectins (WGA, RCA, PHA, PSA, ConA). (D) Statistical results of the association rate constant, (E) dissociation rate constant, and (F) dissociation constant for the three lectins (WGA, RCA, PHA) with obvious binding interaction with red blood cells. Reprinted with permission from Ref. [41].
Figure 9. In situ analysis of small molecules interacting with membrane proteins with edge tracking approaches. (a) WGA binding to glycoprotein on SH-EP1 cells. (b) Acetylcholine binding to nicotinic acetylcholine receptors (ion channel) on SH-EP1-α4β2 cells. (c) AMD3100 binding to CXCR-4 receptors (GPCR) on A549 cells. (d) Insulin binding to insulin receptors (tyrosine kinase receptor) on Hep G2 cells. Reprinted with permission from Ref. [39].
Figure 10. In situ analysis of small molecules interacting with membrane proteins with ESM. (a,d) Bright-field and ESM images, and spring constant map, of the A431 cell interacting with 300 nM (a) and 900 nM (d) erlotinib. (b,e) Image intensity variation against time during the association and dissociation phases for the A431 cell shown in (a,d). The association phase was recorded while flowing the erlotinib solution, and the dissociation phase while flowing the live-cell imaging solution. (c,f) Spring constant variation against time during the association and dissociation phases for the A431 cell shown in (a,d). Reprinted with permission from Ref. [43].
Figure 11. High-throughput and in situ analysis of lectin interacting with membrane proteins with ESM. (A) Bright-field and ESM image of live A431 cells. (B) The image intensity variation against time obtained by averaging the signal of all cells within the field of view. (C) Zoomed views of the marked region at 0 s and 216 s after changing the flow to WGA solution, and the differential image. (D) The image intensity variation against time obtained from the cell in the marked zone in (A). (E–H) Statistical distributions of the association rate constant, dissociation rate constant, dissociation constant and maximum response value in the binding curves obtained from the individual cells. Reprinted with permission from Ref. [44].
Figure 12. High-throughput and in situ analysis of small-molecule ligands interacting with membrane proteins with ESM. (A) Bright-field image and spring constant map of live A431 cells. (B) The spring constant variation against time obtained by averaging the signal of all cells within the field of view. (C) Zoomed views of the marked region at 0 s and 220 s after changing the flow to 1 μM erlotinib, and the differential image. (D) The spring constant variation against time obtained from the cell in the marked zone in (A). (E–H) Statistical distributions of the association rate constant, dissociation rate constant, dissociation constant and maximum response value in the binding curves. Reprinted with permission from Ref. [44].
16 pages, 5991 KiB  
Article
Advanced Imaging Integration: Multi-Modal Raman Light Sheet Microscopy Combined with Zero-Shot Learning for Denoising and Super-Resolution
by Pooja Kumari, Shaun Keck, Emma Sohn, Johann Kern and Matthias Raedle
Sensors 2024, 24(21), 7083; https://doi.org/10.3390/s24217083 - 3 Nov 2024
Viewed by 887
Abstract
This study presents an advanced integration of Multi-modal Raman Light Sheet Microscopy with zero-shot learning-based computational methods to significantly enhance the resolution and analysis of complex three-dimensional biological structures, such as 3D cell cultures and spheroids. The Multi-modal Raman Light Sheet Microscopy system incorporates Rayleigh scattering, Raman scattering, and fluorescence detection, enabling comprehensive, marker-free imaging of cellular architecture. These diverse modalities offer detailed spatial and molecular insights into cellular organization and interactions, critical for applications in biomedical research, drug discovery, and histological studies. To improve image quality without altering or introducing new biological information, we apply Zero-Shot Deconvolution Networks (ZS-DeconvNet), a deep-learning-based method that enhances resolution in an unsupervised manner. ZS-DeconvNet significantly refines image clarity and sharpness across multiple microscopy modalities without requiring large labeled datasets or introducing artifacts. By combining the strengths of multi-modal light sheet microscopy and ZS-DeconvNet, we achieve improved visualization of subcellular structures, offering clearer and more detailed representations of existing data. This approach holds significant potential for advancing high-resolution imaging in biomedical research and other related fields.
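Figure 4 below notes that the denoising performance is assessed with PSNR, SSIM, and RMSE. As a minimal sketch of how such image quality metrics are typically computed (using scikit-image; the reference/denoised arrays and the data range are placeholders, not the study's data):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(reference, denoised, data_range=1.0):
    """PSNR, SSIM and RMSE between a reference image and a denoised image (float arrays)."""
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
    ssim = structural_similarity(reference, denoised, data_range=data_range)
    rmse = np.sqrt(np.mean((reference - denoised) ** 2))
    return psnr, ssim, rmse
```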
Figure 1. Principle of light sheet microscopy. Excitation and collection axes are orthogonally oriented with the sample placed at their intersection. A laser beam is shaped into a sheet and illuminates a thin section of the sample in the focal plane of the detection objective. The objective images the plane onto a camera chip [2].
Figure 2. (a) Isometric view of the Raman light sheet microscope CAD model with connected sCMOS camera. The colored lines indicate the optical path of the illuminating lasers. Red: 660 nm beam propagation. Green: 785 nm beam propagation. Blue: coaxially superimposed 660 nm and 785 nm beam propagation [2]. (b) Sample chamber (24), where the sample is placed.
Figure 3. (a) CAD model of a multi-view sample carrier and corresponding frame for embedding spheroid samples in hydrogels. This sample is located in the sample chamber shown in Figure 2b. (b) Sample holder system consisting of a gel chamber with cylindrical extension, a casting frame and a negative mold for precise, reproducible embedding of spheroids in hydrogel.
Figure 4. (a) The Zero-Shot Deconvolution Network (ZS-DeconvNet) architecture outlines the training workflow, encompassing pre-processing steps (such as corrupted image generation and median filter-based denoising) as well as post-processing techniques, including region-of-interest (ROI) image enhancement and morphological operations. The network's performance is assessed using PSNR, SSIM, and RMSE metrics to achieve enhanced image quality in Raman light sheet microscopy. (b,c) represent the input (b) and output (c) of the ZS-DeconvNet architecture, as depicted in (a).
Figure 5. Comparison of original 11B-Untreated (a) and 11B-Treated (b) cell images and denoised images after ZS-DeconvNet, obtained using laser excitation at 660 nm and AOTF at 650 nm (Rayleigh scattering).
Figure 6. Comparison of original 11B-Untreated (a) and 11B-Treated (b) cell images and denoised images after ZS-DeconvNet, obtained using laser excitation at 660 nm and AOTF at 817 nm (Raman scattering).
Figure 7. Comparison of original 11B-Untreated (a) and 11B-Treated (b) cell images and denoised images after ZS-DeconvNet, obtained using laser excitation at 660 nm and AOTF at 694 nm (fluorescence).
Figure 8. Comparison of original 11B-Untreated (a) and 11B-Treated (b) cell images and denoised images after ZS-DeconvNet, obtained using laser excitation at 785 nm and AOTF at 775 nm (Rayleigh scattering).
Figure 9. Denoising performance of ZS-DeconvNet on 11B-Untreated cell images using the 660 nm laser for Rayleigh spectra: original image, denoised image after ZS-DeconvNet, FRC analysis and denoising metrics (PSNR, SSIM and RMSE).
23 pages, 4829 KiB  
Review
The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning
by Michele Avanzo, Joseph Stancanello, Giovanni Pirrone, Annalisa Drigo and Alessandra Retico
Cancers 2024, 16(21), 3702; https://doi.org/10.3390/cancers16213702 - 1 Nov 2024
Viewed by 1111
Abstract
Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphical processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician’s decision and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with a double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have both improved the capability for automated reading of medical images and moved AI to new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as ‘black boxes’ that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems. Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications.
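Figure 4 below illustrates decision-tree and support-vector-machine classification of iris flower species from petal width and length. A minimal scikit-learn sketch of that classic example follows; the hyperparameters and split are illustrative, not those used to produce the figure.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Use only petal length and petal width, as in the figure
X, y = load_iris(return_X_y=True)
X = X[:, 2:4]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("decision tree accuracy:", tree.score(X_test, y_test))
print("SVM accuracy:", svm.score(X_test, y_test))
```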
(This article belongs to the Section Cancer Informatics and Big Data)
Figure 1. Alan Turing at age 16 (a). Source: Archive Centre, King’s College, Cambridge. The Papers of Alan Turing, AMT/K/7/4. The same image after applying a Sobel filter in the x (b) and y (c) directions.
Figure 2. Timeline of AI (orange) and of AI in medicine (blue).
Figure 3. Scheme of the perceptron.
Figure 4. Application of decision tree (a) and support vector machine (b) learning to the classification of iris flower species from petal width and length. Predictions (areas), training data (dots) and the resulting decision tree are shown on the left and right sides, respectively.
Figure 5. Comparison between single-layer (a) and multilayer ANNs (b).
14 pages, 21265 KiB  
Article
Label-Free Optical Transmission Tomography for Direct Mycological Examination and Monitoring of Intracellular Dynamics
by Eliott Teston, Marc Sautour, Léa Boulnois, Nicolas Augey, Abdellah Dighab, Christophe Guillet, Dea Garcia-Hermoso, Fanny Lanternier, Marie-Elisabeth Bougnoux, Frédéric Dalle, Louise Basmaciyan, Mathieu Blot, Pierre-Emmanuel Charles, Jean-Pierre Quenot, Bianca Podac, Catherine Neuwirth, Claude Boccara, Martine Boccara, Olivier Thouvenin and Thomas Maldiney
J. Fungi 2024, 10(11), 741; https://doi.org/10.3390/jof10110741 - 26 Oct 2024
Viewed by 651
Abstract
Live-cell imaging generally requires pretreatment with fluorophores to monitor either cellular functions or the dynamics of intracellular processes and structures. We have recently introduced full-field optical coherence tomography for the label-free live-cell imaging of fungi, with potential clinical applications for the diagnosis of invasive fungal mold infections. While both the spatial resolution and technical setup of this technology are better suited to the histopathological analysis of tissue biopsies, there is, to our knowledge, no previous work reporting the use of a light interference-based optical technique for direct mycological examination and monitoring of intracellular processes. We describe the first application of dynamic full-field optical transmission tomography (D-FF-OTT) to achieve both high-resolution and live-cell imaging of fungi. First, D-FF-OTT allowed for the precise examination and identification of several elementary structures within a selection of fungal species commonly known to be responsible for invasive fungal infections, such as Candida albicans, Aspergillus fumigatus, or Rhizopus arrhizus. Furthermore, D-FF-OTT revealed the intracellular trafficking of organelles and vesicles related to metabolic processes of living fungi, thus opening new perspectives in fast fungal infection diagnostics.
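The dynamic contrast in D-FF-OTT comes from the temporal fluctuations of the signal at each pixel; Figure 6 below quantifies three spectral parameters (central fluctuation frequency, frequency range, and amplitude of fluctuations). The sketch below shows one common way such per-pixel descriptors can be computed from an image time series; the power-spectrum centroid, width, and integral used here are assumptions on our part, not necessarily the exact estimators used in the article.

```python
import numpy as np

def dynamic_spectral_maps(stack, fps):
    """Per-pixel spectral descriptors of an image time series `stack` (t, y, x) acquired at `fps` Hz."""
    signal = stack - stack.mean(axis=0)                    # remove the static component
    spectrum = np.abs(np.fft.rfft(signal, axis=0)) ** 2    # temporal power spectrum per pixel
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
    power = spectrum.sum(axis=0) + 1e-12
    f_mean = np.tensordot(freqs, spectrum, axes=(0, 0)) / power            # central frequency
    f_var = np.tensordot(freqs**2, spectrum, axes=(0, 0)) / power - f_mean**2
    f_range = np.sqrt(np.clip(f_var, 0, None))                             # frequency spread
    amplitude = np.sqrt(power)                                             # fluctuation amplitude
    return f_mean, f_range, amplitude
```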
(This article belongs to the Special Issue Diagnosis of Invasive Fungal Diseases)
Figure 1. Illustration (a) and schematic representation of the prototype OTT system (b), focus on the objective area of the microscope (c), and a schematic explanation of signal treatment for dynamic imaging reconstruction (d) during Candida albicans imaging. The computer window in (b) shows both structural greyscale (left) and dynamic (right) modes. (Adapted from Ref. [17]).
Figure 2. Bright-field and D-FF-OTT imaging of Candida albicans (a–d) and Candida parapsilosis (e–h). Bright-field imaging of Candida albicans (a) and Candida parapsilosis (e). FF-OTT (b,f) and D-FF-OTT (c,d,g,h) of Candida albicans (upper row) and parapsilosis (lower row). Dashed squares in (c,g) are shown at higher magnification in (d,h). White arrows indicate cell membranes, empty triangles indicate nuclear membranes, and white-filled arrowheads point out organelles. Scale bars represent 10 μm.
Figure 3. Bright-field and D-FF-OTT imaging of Aspergillus fumigatus. Bright-field imaging of unstained (a,e) and lactophenol blue-stained (b,f) fungi. FF-OTT (c,g) and D-FF-OTT (d,h) of similar structures. White arrowheads point out septa. Scale bars represent 10 μm.
Figure 4. Bright-field and FF-OTT imaging of Aspergillus niger. Bright-field imaging of unstained (a,e) and lactophenol blue-stained (b,f) fungi. FF-OTT (c,g) and D-FF-OTT (d,h) of similar structures. White arrowheads point out septa. Scale bars represent 10 μm.
Figure 5. Optical and FF-OTT imaging of Rhizopus arrhizus. Optical imaging of unstained (a,e) and lactophenol blue-stained (b,f) fungi. FF-OTT (c,g) and D-FF-OTT (d,h) of similar structures. White arrowheads indicate sporangiospores. Scale bars represent 10 μm.
Figure 6. Quantitative spectral analysis of the cellular structure of Candida albicans. D-FF-OTT registers three spectral parameters: the central fluctuation frequency (a,b), frequency range (c,d), and amplitude of fluctuations (e,f). Images were analyzed and signals were quantified for each spectral characteristic for three distinct structures, namely the plasma membranes (blue circles), nuclear membranes (orange squares), and lipid droplets (purple triangles). Statistical analysis showed a significant difference (p < 0.0001) for every comparison except that between the amplitude of fluctuations of the plasma and nuclear membranes, as indicated on the corresponding graph (ns in (f)). White arrows indicate cell membranes, empty triangles indicate nuclear membranes, and white-filled arrowheads point out organelles. Scale bars represent 10 μm.
17 pages, 5605 KiB  
Review
Imaging of Live Cells by Digital Holographic Microscopy
by Emilia Mitkova Mihaylova
Photonics 2024, 11(10), 980; https://doi.org/10.3390/photonics11100980 - 18 Oct 2024
Viewed by 693
Abstract
Imaging of microscopic objects is of fundamental importance, especially in life sciences. Recent fast progress in electronic detection and control, numerical computation, and digital image processing has been crucial in advancing modern microscopy. Digital holography is a new field in three-dimensional imaging. Digital reconstruction of a hologram offers the remarkable capability to refocus at different depths inside a transparent or semi-transparent object. Thus, this technique is very suitable for biological cell studies in vivo and could have many biomedical and biological applications. A comprehensive review of the research carried out in the area of digital holographic microscopy (DHM) for live-cell imaging is presented. The novel microscopic technique is non-destructive and label-free and offers unmatched imaging capabilities for biological and bio-medical applications. It is also suitable for imaging and modelling of key metabolic processes in living cells, microbial communities or multicellular plant tissues. Live-cell imaging by DHM allows investigation of the dynamic processes underlying the function and morphology of cells. Future applications of DHM can include real-time cell monitoring in response to clinically relevant compounds. The effect of drugs on migration, proliferation, and apoptosis of abnormal cells is an emerging field of this novel microscopic technique.
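The numerical refocusing capability mentioned above is commonly implemented by propagating the recorded complex field with the angular spectrum method. A minimal sketch follows, assuming the complex hologram field has already been extracted; the wavelength, pixel pitch, and propagation distance are placeholders, not values from the review.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Longitudinal spatial frequency; evanescent components are suppressed
    arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: refocus a hologram field 2 µm deeper (placeholder parameters)
# refocused = angular_spectrum_propagate(field, wavelength=532e-9, pixel_pitch=3.45e-6, distance=2e-6)
```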
Figure 1. Interference on the screen of a CCD camera of a plane reference wave R(x,y) and an object wave O(x,y).
Figure 2. Optical set-up of a digital in-line holographic microscope.
Figure 3. Basic schematic of a digital holographic microscope based on a Mach–Zehnder interferometric configuration.
Figure 4. Images of (a) a digital hologram of the alga Pseudokirchneriella subcapitata; (b–d) the reconstructed intensities at consecutive planes. The distance between the planes changes by 2 μm.
Figure 5. Images of the alga Tetraselmis suecica: (a,c,e) digital holograms; (b,d,f) the wave front intensities of the corresponding images. Cell size is 10.3 μm ± 9.5%.
Figure 6. Healthy, fresh human erythrocytes as captured using digital holographic microscopy. The cells are 2–3 μm thick (reprinted from [33]).
Figure 7. Determination of the refractive index of stenotic and non-stenotic intestinal tissue of Crohn’s disease patients using digital holographic microscopy (DHM). Histological evaluation of HE-staining and the corresponding quantitative DHM phase contrast image show certain fibrotic changes of the submucosal layer of stenotic (C,D) compared to non-stenotic bowel tissue (A,B) (reprinted from [38]).
Figure 8. Lund human mesencephalic neurons (LUHMES), which have been induced to differentiate, can be analyzed for area and optical thickness. (A) represents cells before the differentiation process has started, while (B) represents cells at the end of the differentiation process. The y-axis represents the peak thickness of the cells, while the x-axis represents the area in μm² of each individual object segmented in the image. Each square represents one cell. (C) shows the cells before the differentiation process started, while (D) shows the cells at the end of the differentiation process (reprinted from [33]).
Figure 9. Images of cell suspension culture A: (a) digital hologram; (b) the numerically reconstructed wave front intensity of (a).
Figure 10. Images of cell suspension culture D: (a) digital hologram; (b) the numerically reconstructed wave front intensity of (a).
Figure 11. Images of cell suspension culture MSD: (a) digital hologram; (b) the numerically reconstructed wave front intensity of (a).
Figure 12. Examples of phase images of HeLa, A549 and 3T3 cells in three states: live, apoptotic and necrotic, obtained using digital holography (reprinted from [51]).
Figure 13. Measurement of the spatial phase sensitivity of QPM for direct laser and pseudo-thermal light sources. (a,d) are the interferograms obtained with a healthy sperm cell as a test specimen, (b,e) reconstructed phase maps of the sperm cell corresponding to (a,d), respectively, and (c,f) spatial phase noise of the experimental setup for laser and pseudo-thermal light sources, respectively. Note that the color bars used in (c,f) have different scales (reprinted from [55]).
Figure 14. 3D pseudo-coloured phase plots of HeLa cells obtained before PDT (a,c) and 60 min after irradiation at 22.1 mW/cm² (b) and 93 mW/cm² (d) (reprinted from [57]).
11 pages, 978 KiB  
Article
Estimating Progression-Free Survival in Patients with Primary High-Grade Glioma Using Machine Learning
by Agnieszka Kwiatkowska-Miernik, Piotr Gustaw Wasilewski, Bartosz Mruk, Katarzyna Sklinda, Maciej Bujko and Jerzy Walecki
J. Clin. Med. 2024, 13(20), 6172; https://doi.org/10.3390/jcm13206172 - 16 Oct 2024
Viewed by 876
Abstract
Background/Objectives: High-grade gliomas are the most common primary malignant brain tumors in adults. These neoplasms remain predominantly incurable due to the genetic diversity within each tumor, leading to varied responses to specific drug therapies. With the advent of new targeted and immune therapies, which have demonstrated promising outcomes in clinical trials, there is a growing need for image-based techniques to enable early prediction of treatment response. This study aimed to evaluate the potential of radiomics and artificial intelligence implementation in predicting progression-free survival (PFS) in patients with highest-grade glioma (CNS WHO 4) undergoing a standard treatment plan. Methods: In this retrospective study, prediction models were developed in a cohort of 51 patients with pathologically confirmed highest-grade glioma (CNS WHO 4) from the authors’ institution and the repository of the Cancer Imaging Archive (TCIA). Only patients with confirmed recurrence after complete tumor resection with adjuvant radiotherapy and chemotherapy with temozolomide were included. For each patient, 109 radiomic features of the tumor were obtained from a preoperative magnetic resonance imaging (MRI) examination. Four clinical features were added manually: sex, weight, age at the time of diagnosis, and the lobe of the brain where the tumor was located. The data label was the time to recurrence, which was determined based on follow-up MRI scans. Artificial intelligence algorithms were built to predict PFS in the training set (n = 75%) and then validated in the test set (n = 25%). The performance of each model in both the training and test datasets was assessed using the mean absolute percentage error (MAPE). Results: In the test set, the random forest model showed the highest predictive performance with 1-MAPE = 92.27% and a C-index of 0.9544. The decision tree, gradient booster, and artificial neural network models showed slightly lower effectiveness with 1-MAPE of 88.31%, 80.21%, and 91.29%, respectively. Conclusions: Four of the six models built gave satisfactory results. These results show that artificial intelligence models combined with radiomic features could be useful for predicting the progression-free survival of high-grade glioma patients. This could be beneficial for risk stratification of patients, enhancing the potential for personalized treatment plans and improving overall survival. Further investigation is necessary with an expanded sample size and external multicenter validation.
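A minimal sketch of the evaluation described above (a random forest regressor predicting time to recurrence from radiomic plus clinical features, scored with 1 − MAPE) is given below; the feature matrix, train/test split, and hyperparameters are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def one_minus_mape(y_true, y_pred):
    """1 - mean absolute percentage error, as used to report model performance."""
    return 1.0 - np.mean(np.abs((y_true - y_pred) / y_true))

# X: (n_patients, 113) array of 109 radiomic + 4 clinical features; y: time to recurrence (months)
def evaluate(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return one_minus_mape(y_te, model.predict(X_te))
```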
Figure 1. Assuming that each small square represents a pixel, the morphological and first-order features of images (A,B) would be the same, but the images differ in texture.
Figure 2. Study flowchart. (a) Magnetic resonance (MR) imaging; the study is based on contrast-enhanced T1-weighted images. (b) Identification of a region of interest (ROI) and semi-automatic image segmentation. (c) Normalization and radiomic feature extraction from the defined ROI; 109 radiomic features were obtained in the study. (d) Data preprocessing and analysis; five different machine learning (ML) models were trained on the received data (AI, artificial intelligence; DL, deep learning). (e) Results.
Figure 3. Flowchart of the patient selection process.
Figure 4. Glioma CNS WHO 4 in the left parietal lobe. T1-weighted image after administration of contrast agent; blue marks the tumor segmented by the semi-automated method.
Figure 5. Kaplan–Meier curve of PFS for patients in the study group.
Figure 6. Performance of the five models for predicting PFS, presented using 1-MAPE.
Figure 7. Kaplan–Meier curve of predicted PFS for the test set by the random forest model (blue) and Kaplan–Meier curve of PFS for patients in the study group (orange).
18 pages, 7213 KiB  
Review
A Review of Non-Linear Optical Imaging Techniques for Cancer Detection
by Francisco J. Ávila
Optics 2024, 5(4), 416-433; https://doi.org/10.3390/opt5040031 - 16 Oct 2024
Viewed by 705
Abstract
The World Health Organization (WHO) cancer agency predicts that more than 35 million new cases of cancer will occur in 2050, a 77% increase over the 2022 estimate. Currently, the main cancers diagnosed are breast, lung, and colorectal. There is no standardized tool for cancer diagnosis; initially, clinical procedures are guided by the patient's symptoms and usually involve biochemical blood tests, imaging, and biopsy. Label-free non-linear optical approaches are promising tools for tumor imaging, due to their inherently non-invasive, biosafe contrast mechanisms and their ability to monitor collagen-related disorders and biochemical and metabolic changes during cancer progression. In this review, the main non-linear microscopy techniques are discussed according to three main contrast mechanisms: biochemical, metabolic, and structural imaging.
Figure 1. Raman scattering principle.
Figure 2. (a) Raman spectra of the HepG2 cell line in lipid (blue), cytoplasm (green), and nucleus (red) regions. (b) Comparison of Raman spectra before (green) and after incubation of HepG2 with PLGA nanoparticles (blue). The pink spectrum corresponds to the isolated PLGA NPs. Image reproduced from [23].
Figure 3. SRH images corresponding to non-Hodgkin lymphoma (A), non-small cell lung cancer brain metastasis (B), and glioblastoma (C) specimens (left column). Heat maps of the CNN prediction algorithm (middle column) and overlays of the SRH images with the heat maps (right column). Scale bar: 100 μm. Image reproduced from [30].
Figure 4. (a) CARS image of the HepG2 cell line showing lipid (red), protein (green), and nucleic acid (blue) regions. (b) Associated CARS spectra. Image reproduced from [41].
Figure 5. TPEF image of CAF (middle); monocellular HGT1 (labeled with eGFP) spheroids (left) and bicellular HGT1/CAF (right). Scale bar: 100 μm. Image reproduced from [55].
Figure 6. Lifetime TPEF image of a normal breast cell (a) and the normalized fluorescence decay signal (b). I(t), IRF(t), and F(t) correspond to the intensity of the decay signal, the impulse response function, and the fitted model, respectively. Reproduced from [68].
Figure 7. Illustration of the two-photon excitation fluorescence (a), second harmonic generation (b), and third harmonic generation (c) non-linear processes.
Figure 8. Second-order susceptibility values of normal (a) and tumor (b) breast tissues. Scale bar: 20 μm. Images reproduced from [100].
Figure 9. THG images of cells isolated from healthy tissue (left column) and for different grades of breast cancer (third to fourth columns). The top (A) and bottom (B) panels compare the morphology of the nucleus and nucleoli, respectively. Red and white arrows indicate the irregular nucleus and nucleoli, respectively. Scale bar: 2 μm. Images reproduced from [109].
16 pages, 2072 KiB  
Review
Chiral, Topological, and Knotted Colloids in Liquid Crystals
by Ye Yuan and Ivan I. Smalyukh
Crystals 2024, 14(10), 885; https://doi.org/10.3390/cryst14100885 - 11 Oct 2024
Viewed by 780
Abstract
The geometric shape, symmetry, and topology of colloidal particles often allow for controlling colloidal phase behavior and physical properties of these soft matter systems. In liquid crystalline dispersions, colloidal particles with low symmetry and nontrivial topology of surface confinement are of particular interest, including surfaces shaped as handlebodies, spirals, knots, multi-component links, and so on. These types of colloidal surfaces induce topologically nontrivial three-dimensional director field configurations and topological defects. Director switching by electric fields, laser tweezing of defects, and local photo-thermal melting of the liquid crystal host medium promote transformations among many stable and metastable particle-induced director configurations that can be revealed by means of direct label-free three-dimensional nonlinear optical imaging. The interplay between topologies of colloidal surfaces, director fields, and defects is found to show a number of unexpected features, such as knotting and linking of line defects, often uniquely arising from the nonpolar nature of the nematic director field. This review article highlights fascinating examples of new physical behavior arising from the interplay of nematic molecular order and both chiral symmetry and topology of colloidal inclusions within the nematic host. Furthermore, the article concludes with a brief discussion of how these findings may lay the groundwork for new types of topology-dictated self-assembly in soft condensed matter leading to novel mesostructured composite materials, as well as for experimental insights into the pure-math aspects of low-dimensional topology.
(This article belongs to the Special Issue Liquid Crystal Research and Novel Applications in the 21st Century)
Show Figures

Figure 1

Figure 1
<p>Colloids in liquid crystals (LCs). (<b>a</b>) Microscopic structure of a nematic LC with rod-like mesogens, i.e., pentylcyanobiphenyl (5CB). The micrograph (right image) shows the texture of a 5CB droplet observed under a microscope with crossed polarizers, polarization direction marked with white double arrows. Inset shows the chemical structure of 5CB molecules and their collective alignment within a small volume. (<b>b</b>) Topological defects in LCs of different winding numbers; green rods represent LC molecules. (<b>c</b>,<b>d</b>) Homeotropic and planar surface anchoring where LC molecules align perpendicular and parallel to the surface of colloidal inclusions. Orange dots and dashes represent surface functioning agents such as polymer grafting that impose the anchoring direction. (<b>e</b>,<b>f</b>) Micrograhs showing microspheres with homeotropic surface anchoring inducing “hedgehog” point defect and “Saturn ring” line defect; white double arrows indicate the crossed polarizers. (<b>g</b>) Micrographs showing microsphere with planar surface anchoring inducing “boojum” surface defects at the polar points of the sphere. (<b>h</b>–<b>j</b>) Corresponding schematics illustrating LC director field configurations around the colloidal spheres. The black dots and line represent the hedgehog defect, the Saturn ring loop, and the surface boojums, respectively. Schematics are not drawn to scale. The far-field director is shown by the double arrow marked with <b>n</b><sub>0</sub>. Adapted from Ref. [<a href="#B21-crystals-14-00885" class="html-bibr">21</a>].</p>
Full article ">Figure 2
<p>Chiral colloids in LCs. (<b>a</b>) Micrograph of a chiral microstructure obtained by 3D microprinting. (<b>b</b>) Director field distortions around such a particle with planar surface anchoring bound to the confining substrate. The lines in the middle layer and the cylinders show that twist deformation is induced in the director field over the particle despite uniform far-field alignment. (<b>c</b>) Interaction forces vs. distance between the surface-bound chiral structure and a free-floating colloidal sphere. The inset is a micrograph of the interacting objects and the interaction trajectory is color-coded with time. (<b>d</b>,<b>e</b>) Director distributions around right-handed microsprings with a planar surface anchoring at energy-minimized positions. The double arrows indicate the far-field director <b>n</b><sub>0</sub>; the color on the particles represents the orientation of the surface director projected to the plane orthogonal to <b>n</b><sub>0</sub>; color scheme is shown as the inset of (<b>d</b>). (<b>f</b>,<b>g</b>) Snapshots of elasticity-mediated interactions between like- (<b>f</b>) and opposite- (<b>g</b>) handed microsprings, exhibiting attraction and repulsion, respectively, over the time of 10–100 s. Scale bars are 5 μm. (<b>h</b>,<b>i</b>) Numerically calculated Landau-de Gennes free energy vs. particle distances between like- (<b>h</b>) and opposite- (<b>i</b>) handed microsprings. When particles are far away from each other, the free energy scales as <span class="html-italic">d</span><sup>−3</sup> like that of dipole–dipole interactions. The distance between particles <span class="html-italic">d</span> is normalized by the particle radius <span class="html-italic">R</span>; the free energy <span class="html-italic">F</span><sub>LdG</sub> is normalized by the thermal energy <span class="html-italic">k</span><sub>B</sub><span class="html-italic">T</span>, where <span class="html-italic">k</span><sub>B</sub> is the Boltzmann constant and <span class="html-italic">T</span> is the room temperature. Adapted from Refs. [<a href="#B34-crystals-14-00885" class="html-bibr">34</a>,<a href="#B35-crystals-14-00885" class="html-bibr">35</a>].</p>
Full article ">Figure 3
<p>Topological colloids in LCs. (<b>a</b>,<b>b</b>) A square platelet with a central opening suspended in a LC. The image in (<b>a</b>) is taken with fluorescence confocal polarizing microscopy (FCPM); P<sub>FCPM</sub> indicates the polarization direction of the excitation light. The schematic in (<b>b</b>) shows the director distribution around the platelet. (<b>c</b>,<b>d</b>) Colloidal handlebodies of various genera in LCs. Panels in (<b>c</b>) are micrographs obtained by overlapping fluorescence images with orthogonal excitation polarizations as indicated by the green and magenta arrows; insects below are cross-sectional images in the <span class="html-italic">xz</span> plane taken along the yellow dashed lines. The schematics in (<b>d</b>) represent the director (black lines) distortions and topological defects (red and purple lines; purple dots) induced by the handlebodies; the total topological charge is determined by the particle genus <span class="html-italic">m</span><sub>c</sub> = 1 − <span class="html-italic">g</span>. (<b>e</b>–<b>g</b>) A large torus-shaped colloidal particle with homeotropic surface anchoring suspended in a nematic LC, inducing ½ and edge-pinned ¼ defect lines. The ¼ defect lines (blue lines in the schematic (<b>f</b>,<b>g</b>)) traverses along the edge of the particle and may jump between edges connected by ½ defect lines (red lines in (<b>f</b>,<b>g</b>)). The black arrows in the micrograph in (<b>e</b>) indicate the location of bulk ½ defect lines; tilting of the particle is indicated by the dashed line (rotation axis) and red curved arrow (tilt direction). Insets in (<b>e</b>) are obtained under crossed polarizers; polarization marked by white arrows. (<b>h</b>) Fractal colloidal particles with homeotropic surface anchoring in LCs. The first column is taken at elevated temperatures when the surrounding LC is in isotropic phase; the middle two columns are taken under crossed polarizers (white double arrows) and crossed polarizers with a retardation waveplate (yellow line). The last column is a computer-simulated director field distribution around such particles. Adapted from Refs. [<a href="#B31-crystals-14-00885" class="html-bibr">31</a>,<a href="#B37-crystals-14-00885" class="html-bibr">37</a>,<a href="#B39-crystals-14-00885" class="html-bibr">39</a>,<a href="#B40-crystals-14-00885" class="html-bibr">40</a>].</p>
Full article ">Figure 4
<p>Stimuli-responsive topological colloids. (<b>a</b>) Ring-shaped microparticles made with liquid crystal elastomers change shape upon temperature elevation over the nematic–isotropic phase transition point. The schematic in (<b>a</b>) shows the opening and closing of the central hole, effectively changing the topology of the particle. (<b>b</b>) Alignment and polarization-dependent extinction of plasmonic triangular nanoframes dispersed in a nematic LC. The schematic in (<b>a</b>) shows the director distortions caused by the triangular frame. The normal <b><span class="html-italic">ν</span></b> of the plane containing the nanoframe has the freedom to rotate in a cone shape with the far-field director <b>n</b><sub>0</sub> as the symmetry axis; <b>P</b> represents the polarization of the incident light. (<b>c</b>) Contraction of a ring-shaped particle with homeotropic anchoring and the induced disclination loops (indicated by red lines) in a nematic LC host. The far-field director is perpendicular to the viewing plane. Adapted from Refs. [<a href="#B41-crystals-14-00885" class="html-bibr">41</a>,<a href="#B42-crystals-14-00885" class="html-bibr">42</a>,<a href="#B43-crystals-14-00885" class="html-bibr">43</a>].</p>
Full article ">Figure 5
<p>Knotted (<b>a</b>–<b>e</b>) and linked (<b>f</b>–<b>l</b>) colloids in LCs. (<b>a</b>,<b>b</b>) Optical micrographs of a trefoil knot with planar (<b>a</b>) and homeotropic (<b>b</b>) surface anchoring suspended in LCs. Crossed white double arrows indicate the direction of the polarizer and the analyzer and the yellow double arrow indicates the direction of the slow axis of a 530 nm retardation plate. (<b>c</b>,<b>d</b>) Numerically simulated director field distributions on the surface of trefoil (<b>c</b>) and pentafoil (<b>d</b>) knots with planar surface anchoring. The color represents the orientation of the surface director projected to the plane orthogonal to <b>n</b><sub>0</sub>; the color scheme is shown as the inset of (<b>c</b>). The far-field director <b>n</b><sub>0</sub> is perpendicular to the sample plane as marked by the dot in a circle. (<b>e</b>) Schematic of defect lines (represented by green and magenta lines) induced by and entwined with a trefoil knot of homeotropic anchoring obtained from numerical simulation. (<b>f</b>) Polarizing, fluorescence, and simulated micrographs of linked colloidal rings with tangential surface anchoring suspended in LCs. The green and red double arrows represent the excitation polarization for fluorescence imaging. (<b>g</b>) Elastic interaction energy vs. deviation from the equilibrium center-to-center separation Δ<span class="html-italic">d</span> (black symbols) and orientation Δ<span class="html-italic">α</span> (green symbols) as defined in the inset. (<b>h</b>) Numerically simulated director field distributions on the surface of a Hopf link at the position in (<b>f</b>). The color represents the director orientation as defined in the inset. (<b>i</b>, <b>k</b>) Polarizing micrographs of Hopf (<b>i</b>) and Salomon (<b>k</b>) links with homeotropic surface anchoring suspended in LCs. (<b>j</b>,<b>l</b>) corresponding simulated director field distributions and defect field (represented by the red lines) induced by the links. The red arrow in (<b>j</b>) points at the location where the disclination line jumps from one colloidal loop to the other. The double arrows marked <b>n</b><sub>0</sub> represent the far-field director. Adapted from Refs. [<a href="#B46-crystals-14-00885" class="html-bibr">46</a>,<a href="#B49-crystals-14-00885" class="html-bibr">49</a>].</p>
">
30 pages, 23098 KiB  
Article
A Dataset of Visible Light and Thermal Infrared Images for Health Monitoring of Caged Laying Hens in Large-Scale Farming
by Weihong Ma, Xingmeng Wang, Xianglong Xue, Mingyu Li, Simon X. Yang, Yuhang Guo, Ronghua Gao, Lepeng Song and Qifeng Li
Sensors 2024, 24(19), 6385; https://doi.org/10.3390/s24196385 - 2 Oct 2024
Viewed by 1099
Abstract
Considering animal welfare, the free-range laying hen farming model is increasingly gaining attention. However, in some countries, large-scale farming still relies on the cage-rearing model, making the welfare of caged laying hens an equally important concern. To evaluate the health status of caged laying hens, a dataset comprising visible light and thermal infrared images was established for analyses, including morphological, thermographic, comb, and behavioral assessments, enabling a comprehensive evaluation of the hens’ health, behavior, and population counts. To address the issue of insufficient data samples in the health detection process for individual and group hens, a dataset named BClayinghens was constructed, containing 61,133 visible light and thermal infrared images. The BClayinghens dataset was collected using three types of devices: smartphones, visible light cameras, and infrared thermal cameras. Each thermal infrared image corresponds to a visible light image and was positionally aligned through coordinate correction. Additionally, the visible light images were annotated with chicken head labels, yielding 63,693 chicken head labels, which can be used directly for training deep learning models for chicken head object detection and combined with the corresponding thermal infrared data to analyze chicken head temperature. To enable the constructed deep-learning object detection and recognition models to adapt to different breeding environments, various data augmentation methods such as rotation, shearing, color enhancement, and noise addition were applied to the images. The BClayinghens dataset is important for applying visible light images and corresponding thermal infrared images to the health detection, behavioral analysis, and counting of caged laying hens in large-scale farming. Full article
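The augmentation operations listed in the abstract (rotation, shearing, color enhancement, and noise addition) are standard image transforms. A minimal sketch of such a pipeline in torchvision is shown below; the parameter values and the choice of torchvision are illustrative assumptions, not the dataset authors' actual preprocessing code, and for detection data the bounding-box labels would have to be transformed alongside the images.

```python
# Minimal augmentation sketch (illustrative only; parameter values are assumptions,
# not the BClayinghens authors' settings).
import torch
from torchvision import transforms

def add_gaussian_noise(img_tensor, std=0.02):
    """Add zero-mean Gaussian noise to a [0, 1] image tensor."""
    noisy = img_tensor + torch.randn_like(img_tensor) * std
    return noisy.clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                 # rotation
    transforms.RandomAffine(degrees=0, shear=10),          # shearing
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2),                # color enhancement
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),                 # noise addition
])

# Usage: augmented = augment(pil_image)  # pil_image is a PIL.Image of a cage scene
```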
Figure 1. Data collection device—autonomous poultry house inspection robot. 1, lifting device of the chicken inspection robot; 2, information collection and control device; 3, walking device [12]; 4, monitoring and image capture device; 5, image capture device—infrared thermal camera; 6, image capture device—visible light camera; 7, visualization interface.
Figure 2. Data collection process. Hardware selection: mobile phones, visible light cameras, infrared thermal imagers, and other peripheral hardware. Establishment of the chicken inspection robot: assemble the chicken inspection robot and debug the image acquisition.
Figure 3. Chicken inspection robot. (a) The overall structure of the dead chicken detection robot; (b) image acquisition device. 1, lifting device of the chicken inspection robot; 2, information collection and control system; 3, locomotion device [12]; 4, monitoring and image capture device; 5, image capture device—infrared thermal camera; 6, image capture device—visible light camera.
Figure 4. Data collection process flowchart.
Figure 5. Data augmentation processing.
Figure 6. The dataset contains various morphological variations of individual laying hens.
Figure 7. Diverse conditions of caged laying hens in the dataset.
Figure 8. Variability in RGB–TIR image pairings presented in the dataset.
Figure 9. Data annotation process.
Figure 10. Dataset structure 1.
Figure 11. Dataset structure 2.
Figure 12. File count of BClayinghens dataset 1.
Figure 13. File count of BClayinghens dataset 2.
Figure 14. Chicken head detection dataset quality assessment. (x, y) represents the center coordinates of the chicken head target box; width is the width of the chicken head target box; height is the height of the chicken head target box.
Figure 15. Loss and precision changes of the RT-DETR and YOLOv5 chicken head detection models. (a) giou_loss; (b) cls_loss; (c) l1_loss; (d) box_loss; (e) obj_loss.
Figure 16. Loss and precision changes of the YOLOv8 and YOLOv9 chicken head detection models. (a) box_loss; (b) cls_loss; (c) dfl_loss; (d) box_loss; (e) cls_loss; (f) dfl_loss.
Figure 17. Loss and precision changes of the YOLOv10 chicken head detection model. (a) box_om; (b) cls_om; (c) dfl_om; (d) box_oo; (e) cls_oo; (f) dfl_oo.
Figure 18. Model recall and mAP50 change curves.
Figure 19. Recognition effect of the RT-DETR object detection algorithm.
Figure 20. Recognition effect of the YOLOv5 object detection algorithm.
Figure 21. Recognition effect of the YOLOv8 object detection algorithm.
Figure 22. Recognition effect of the YOLOv9 object detection algorithm.
Figure 23. Recognition effect of the YOLOv10 object detection algorithm.
Figure 24. Chicken head temperature recognition effect.
21 pages, 5469 KiB  
Article
Θ-Net: A Deep Neural Network Architecture for the Resolution Enhancement of Phase-Modulated Optical Micrographs In Silico
by Shiraz S. Kaderuppan, Anurag Sharma, Muhammad Ramadan Saifuddin, Wai Leong Eugene Wong and Wai Lok Woo
Sensors 2024, 24(19), 6248; https://doi.org/10.3390/s24196248 - 26 Sep 2024
Cited by 1 | Viewed by 614
Abstract
Optical microscopy is widely regarded as an indispensable tool in healthcare and manufacturing quality control processes, although its inability to resolve structures separated by a lateral distance under ~200 nm has culminated in the emergence of a new field named fluorescence nanoscopy, which is itself prone to several caveats (namely phototoxicity, interference caused by exogenous probes, and cost). In this regard, we present a triplet string of concatenated O-Net (‘bead’) architectures (termed ‘Θ-Net’ in the present study) as a cost-efficient and non-invasive approach to enhancing the resolution of non-fluorescent phase-modulated optical microscopy images in silico. The quality of the afore-mentioned enhanced resolution (ER) images was compared with that obtained via other popular frameworks (such as ANNA-PALM, BSRGAN and 3D RCAN), with the Θ-Net-generated ER images depicting an increased level of detail (unlike previous DNNs). In addition, the use of cross-domain (transfer) learning to enhance the capabilities of models trained on differential interference contrast (DIC) datasets [where phasic variations are not as prominently manifested as amplitude/intensity differences in the individual pixels, unlike phase-contrast microscopy (PCM)] has resulted in the Θ-Net-generated images closely approximating the expected (ground truth) images for both the DIC and PCM datasets. This demonstrates the viability of our current Θ-Net architecture in attaining highly resolved images under poor signal-to-noise ratios while eliminating the need for a priori PSF and OTF information, thereby potentially impacting several engineering fronts (particularly biomedical imaging and sensing, precision engineering and optical metrology). Full article
(This article belongs to the Special Issue Precision Optical Metrology and Smart Sensing)
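To make the ‘string of beads’ idea concrete—several image-to-image nodes applied one after another, each refining the previous node's output—the following PyTorch sketch chains three placeholder sub-networks. It is a schematic assumption for intuition only; the TinyNode module merely stands in for an O-Net-like node and is not the published Θ-Net implementation.

```python
# Schematic sketch of chaining three encoder-decoder "nodes" in sequence
# (an illustration of the concatenated-node idea only; NOT the published Θ-Net code).
import torch
import torch.nn as nn

class TinyNode(nn.Module):
    """Stand-in for one 'bead' (e.g., an O-Net-like image-to-image network)."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x) + x  # residual refinement of the incoming image

class ChainedNodes(nn.Module):
    """Several nodes applied back to back, each refining the previous output."""
    def __init__(self, n_nodes: int = 3):
        super().__init__()
        self.nodes = nn.ModuleList([TinyNode() for _ in range(n_nodes)])

    def forward(self, x):
        for node in self.nodes:
            x = node(x)
        return x

model = ChainedNodes(n_nodes=3)
restored = model(torch.randn(1, 3, 256, 256))   # low-detail input -> refined output
print(restored.shape)                           # torch.Size([1, 3, 256, 256])
```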
Figure 1. A generalized schematic (overview) of the Θ-Net architecture as proposed in the current context. Θ-Net adopts a ‘string of beads’ methodology of concatenating multiple O-Nets, thereby enhancing the DNN’s resilience to feature-based variations present in different samples that it might be trained with. Here (and as is presented in the current study), we employ a 3-node Θ-Net framework for model training and validation.
Figure 2. The structure of the 5-layer (Panel A) and 7-layer (Panel B) O-Net architectures utilized as nodes for Θ-Net (Panel C), as described in the present study. Each of the 3 nodes of Θ-Net (shown in Figure 1 previously) is an O-Net model specifically trained with the input image dataset for the imaging modality in which it is intended to be deployed. The exception here refers to the 3rd node utilized in the current Θ-Net framework—the O-Net model here was trained using both the DIC and PCM image datasets, via a transfer learning approach. As with the traditional O-Net architecture (described in [29]), skip-connections (concatenations) are used to join layers in the encoder block (consisting of transposed convolution operations) with their corresponding conjugates in the decoder block (comprising convolution operations).
Figure 3. The validation of in silico ER images obtained through various models, including those developed using O-Net [29] and the presently introduced Θ-Net. The sample shown here consists of highly magnified views of skeletal muscle tissue (L.S.) (adapted from Figure S11 of the Supplementary Materials, with further evaluation images being presented in this figure for the interested reader as well). The O-Net model was trained over 101 epochs, while the Θ-Net model assimilated O-Net models trained over 160 epochs (for the 1st node), a 120-epoch-trained O-Net model (for the 2nd node) and a 160-epoch-trained and transfer-learnt O-Net model (for the 3rd node). Notice the closer resemblance of the Θ-Net-generated image to the Expected (ground truth) image, as compared to the O-Net models (highlighted within the blue ellipses). N.B.: The Source image (input) was acquired via a 20X/0.40 Ph1 objective, while the Expected (ground truth) image was obtained using a 40X/0.60 Ph2 objective. Images generated through models founded on other frameworks (namely 3D RCAN [26], BSRGAN [38] and ANNA-PALM [23]) were also included for comparison purposes (the ANNA-PALM [23] model developed for increasing the resolution of grayscale photomicrographs of microtubules was utilized as an extension within ImageJ 1.52n (NIH, USA), while the 3D RCAN [26] model was trained over 250 epochs with 1972 steps per epoch and 2 residual groups). N.B.: The Θ-Net models utilized for generating the ER images in this figure and Figure S11 implement optional node scaling for each node, differing from the rest of this study. The supplied code (described in the accompanying Supplementary Materials) allows the user to select whether node scaling should be applied (or not), based on the user’s discernment of their image dataset and acquisition parameters.
Figure 4. In silico ER images obtained through several models, including those developed using U-Net [11], O-Net [29] and Θ-Net (O-Net and Θ-Net models similar to those described in Figure 3 were used for the ER images here as well). As with Figure 3, the Source image (input) was acquired via a 20X/0.40 Ph1 objective, while the Expected (ground truth) image was obtained using a 40X/0.60 Ph2 objective. N.B.: This figure was sourced from Figure S12 of the accompanying Supplementary Materials. Notice that some features (such as the cell walls within the blue ellipses shown here or the shadow-like striations within the green ellipses in Figure S12) are resolved differently by the Θ-Net models as compared to O-Net—in the Θ-Net images, the cell walls appear more granular around the periphery, while the striations are less visible. This might be postulated to be due to the Θ-Net architecture adopting a transfer-learnt model from PCM; hence, pseudo-relief artifacts characteristic of DIC imaging would be less pronounced (leading to a reduced accentuation of the striations), while phasic variations identified in the image are characterized pixel-wise (accounting for the granular edges of the cell wall when adopting Θ-Net models for ER images). N.B.: The deployment of the Θ-Net model utilized for generating the ER images in this figure does not employ node scaling, differing from that used for Figure 3.
Figure 5. Loss functions for DIC micrographs imaged under the Θ-Net architecture. (a) Discriminator losses on both the real and generated samples, represented by dR (green line) and dG (blue line) losses, respectively, as well as (b) the generator (g) loss plotted as a red line. The spikes observed in the dR and dG loss function plots (a) demonstrate the commencement of a subsequent training run, implicating the selection of a new random seed for DNN training. Moreover, this approach of intermittently training the DNN models over multiple runs (rather than a single continuous run) is preferred [40] as it restructures the loss function error landscape, allowing the model to reach its global minimum (even if it may be trapped in one of the local minima during an earlier training run).
Figure 6. A comparison of the images obtained by utilizing models developed on the O-Net and Θ-Net architectures using PCM micrographs. Here, the O-Net model was trained over 120 epochs, while the Θ-Net framework incorporated the said O-Net model (in its 1st node), a 120-epoch-trained O-Net model (for the 2nd node) and a 160-epoch-trained and transfer-learnt O-Net model (for the 3rd node), identical to that used for the DIC micrographs in Figure 3 previously. Also (as enunciated for the images in Figure 3), the Source (input) image was acquired via a 20X/0.40 Ph1 objective, while the Expected (ground truth) image was obtained with a 40X/0.60 Ph2 objective. N.B.: This figure was sourced from Figure S13 of the accompanying Supplementary Materials. As with the Θ-Net-generated DIC micrographs in Figure 3 previously, the region encircled within the blue ellipse of the Θ-Net-generated images depicts an enhanced level of detail (as compared to its O-Net counterpart), supporting the proposed Θ-Net framework as a viable improvement over O-Net (for producing computational models to facilitate in silico ER). Here too (as with Figure 3 previously), the ANNA-PALM [23] model was utilized as an ImageJ extension for microtubule SR, while the 3D RCAN [26] model was trained over 250 epochs with 1972 steps per epoch and 2 residual groups. Contrast enhancement (in MS PowerPoint) was also applied to the Θ-Net-derived image, making it easier to discern the features.
Figure 7. ER images obtained through several models, including those developed using O-Net [29] and Θ-Net. The O-Net model was trained over 101 epochs, while the Θ-Net model assimilated O-Net models trained over 120 epochs (for the 1st node), a 120-epoch-trained O-Net model (for the 2nd node) and a 160-epoch-trained and transfer-learnt O-Net model (for the 3rd node). Notice the increased resolution evident in the Θ-Net-generated image, which seemingly surpasses that generated via O-Net and even the Expected (ground truth) images, as highlighted within the blue ellipses. N.B.: The Source image (input) was acquired via a 20X/0.40 Ph1 objective, while the GT (ground truth) image was obtained using a 40X/0.60 Ph2 objective. Images generated through models founded on other frameworks (namely 3D RCAN [26], BSRGAN [38] and ANNA-PALM [23]) were also included for comparison purposes (the ANNA-PALM [23] model developed for increasing the resolution of grayscale photomicrographs of microtubules was utilized as an extension within ImageJ 1.52n (NIH, USA), while the 3D RCAN [26] model was trained over 250 epochs with 1972 steps per epoch and 2 residual groups). Brightness enhancement (in MS PowerPoint) was applied to the Θ-Net-derived image, facilitating the discernment of the features. This figure was adapted from Figure S14 of the Supplementary Materials; further evaluation images are presented in Figure S14 for the interested reader.
Figure 8. Loss function plots for Θ-Net models trained on PCM micrographs. (a) denotes the discriminator losses for real (dR) and generated (dG) samples, indicated by green and blue lines, respectively, while (b) represents the generator (g) loss (as red lines). As with Figure 5 previously, spikes present in the plots of the dR and dG loss functions indicate the start of a new training run (using the existing Python script), potentially implying the selection of a new random seed at the start of each run (model training for this dataset was also conducted intermittently).
Figure 9. Sample O-Net- and Θ-Net-generated images from (A) DIC and (B) PCM micrographs. For this figure, a random sample was selected and assayed, as a means of exercising stringency when evaluating potential mismatches between Θ-Net and ground truth (Expected) images. Here, we observe that the local SSIM maps for both O-Net and Θ-Net appear to be generally white, which is indicative of a high correlation between the respective DNN-generated images and the ground truth (Expected) images [the local SSIM maps are based on individual pixel mismatches within an 11-by-11 neighborhood [42] and range from black (0) to white (1)]. From the table, it may be observed that Θ-Net generally surpasses O-Net when increasing the resolution of DIC micrographs, while the reverse holds true for PCM images (at least for the assayed images in this instance). Nonetheless (for the PCM images), the image metrics (such as PSNR and SSIM) for Θ-Net seem to closely approach those of O-Net (differing by <1% for SSIM scores but a slightly higher margin of <9% for PSNR scores), implying that Θ-Net models might be more susceptible to local variations in individual pixel values, seeking to convert these into discernible features and thereby incurring an imposed ‘penalty’ and reduced SSIM scores (as these metrics may interpret such features as noise). Further evidence of this is provided through the IMSE metric, which clearly shows a marked increase for the Θ-Net-generated images (when compared with those from O-Net) for the PCM images (Panel B).
Figure 10. DIC and PCM photomicrographs infused with salt-and-pepper noise (noise density: 20%), labeled Source + Noise, and their corresponding ‘denoised’ images, as well as the ground truth (Expected) images utilized here. Here, we notice that Θ-Net models are relatively resilient to these noisy pixels, despite not being trained specifically to denoise images. This deduction is further corroborated by comparing these images against the Θ-Net-generated images in the absence of input noise (depicted within the violet ellipses). Nonetheless, it would be prudent to mention at this juncture that the Θ-Net models are still somewhat influenced by the noisy pixels in the image; hence, it would be preferable for image denoising to be performed separately prior to inputting the denoised images into Θ-Net for subsequent in silico ER.
20 pages, 13452 KiB  
Article
Cadastral-to-Agricultural: A Study on the Feasibility of Using Cadastral Parcels for Agricultural Land Parcel Delineation
by Han Sae Kim, Hunsoo Song and Jinha Jung
Remote Sens. 2024, 16(19), 3568; https://doi.org/10.3390/rs16193568 - 25 Sep 2024
Viewed by 598
Abstract
Agricultural land parcels (ALPs) are essential for effective agricultural management, influencing activities ranging from crop yield estimation to policy development. However, traditional methods of ALP delineation are often labor-intensive and require frequent updates due to the dynamic nature of agricultural practices. Additionally, the significant variations across different regions and the seasonality of agriculture pose challenges to the automatic generation of accurate and timely ALP labels for extensive areas. This study introduces the cadastral-to-agricultural (Cad2Ag) framework, a novel approach that uses cadastral data as training labels for deep learning models that delineate ALPs. Cadastral parcels, which are relatively widely available and stable elements in land management, serve as proxies for ALP delineation. Employing an adapted U-Net model, the framework automates the segmentation process using remote sensing images and geographic information system (GIS) data. This research evaluates the effectiveness of the proposed Cad2Ag framework in two U.S. regions—Indiana and California—characterized by diverse agricultural conditions. Through rigorous evaluation across multiple scenarios, the study explores ways to enhance the accuracy and efficiency of ALP delineation. Notably, the framework demonstrates effective ALP delineation across different geographic contexts through transfer learning when supplemented with a small set of clean labels, achieving an F1-score of 0.80 and an Intersection over Union (IoU) of 0.67 using only 200 clean label samples. The Cad2Ag framework’s ability to leverage automatically generated, extensive, free training labels presents a promising solution for efficient ALP delineation, thereby facilitating effective management of agricultural land. Full article
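For reference, the two headline metrics quoted above (F1-score and IoU) follow from the standard pixel-wise definitions; the sketch below is a generic implementation of those definitions, not the evaluation code used in the Cad2Ag study.

```python
# Generic pixel-wise IoU and F1 for binary segmentation masks
# (standard definitions; not the Cad2Ag authors' evaluation code).
import numpy as np

def iou_and_f1(pred: np.ndarray, truth: np.ndarray):
    """pred, truth: boolean arrays of the same shape (True = parcel pixel)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return iou, f1

# Note: for aggregate counts, IoU = F1 / (2 - F1), so an F1 of 0.80 corresponds
# to an IoU of about 0.67 -- consistent with the numbers reported in the abstract.
pred = np.random.rand(512, 512) > 0.5    # placeholder prediction mask
truth = np.random.rand(512, 512) > 0.5   # placeholder ground-truth mask
print(iou_and_f1(pred, truth))
```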
Figure 1. Our study area for the training dataset generation and the transfer learning experiment. Seventeen counties in Indiana (indicated in green) were included to generate the training datasets, and transfer learning was conducted in Fresno County, California (indicated in orange).
Figure 2. Spectral signatures and temporal behavior over the areas in both Indiana and California.
Figure 3. General workflow of the study.
Figure 4. The Unet-32 architecture used in the experiments. n, m, b, f, and c refer to the number of pixels, the number of incorporated multi-temporal images, the number of bands, the number of filters, and the number of classes.
Figure 5. Samples of generated labels with corresponding RGB images for the ‘Single temporal’ and ‘Multi-temporal’ datasets. In these images, black represents the background class, white the parcel class, red the buffer class, and green the road class. The ‘Multi-temporal’ figure highlights the variations in the RGB image over time.
Figure 6. Segmentation results for the IN-I dataset. Errors are shown in the last row. White, black, red, and blue indicate TP, TN, FP, and FN, respectively.
Figure 7. Box plot of IoU for each class with different training datasets (left: single-temporal, right: multi-temporal).
Figure 8. Box plot of the F1 score and IoU using different sample sizes for transfer learning. The zero-training-sample case leverages only the pre-trained model from the IN-VI dataset.
Figure 9. Segmentation results using different sample sizes on the California dataset (CA-I).
22 pages, 6892 KiB  
Review
Review on Photoacoustic Monitoring after Drug Delivery: From Label-Free Biomarkers to Pharmacokinetics Agents
by Jiwoong Kim, Seongwook Choi, Chulhong Kim, Jeesu Kim and Byullee Park
Pharmaceutics 2024, 16(10), 1240; https://doi.org/10.3390/pharmaceutics16101240 - 24 Sep 2024
Viewed by 781
Abstract
Photoacoustic imaging (PAI) is an emerging noninvasive and label-free method for capturing the vasculature, hemodynamics, and physiological responses following drug delivery. PAI combines the advantages of optical and acoustic imaging to provide high-resolution images with multiparametric information. In recent decades, PAI’s abilities have been used to determine reactivity after the administration of various drugs. This study investigates photoacoustic imaging as a label-free method of monitoring drug delivery responses by observing changes in the vascular system and oxygen saturation levels across various biological tissues. In addition, we discuss photoacoustic studies that monitor the biodistribution and pharmacokinetics of exogenous contrast agents, offering contrast-enhanced imaging of diseased regions. Finally, we demonstrate the crucial role of photoacoustic imaging in understanding drug delivery mechanisms and treatment processes. Full article
(This article belongs to the Special Issue Advanced Materials Science and Technology in Drug Delivery)
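Label-free oxygen-saturation (sO2) monitoring of the kind surveyed in this review is usually obtained by linear spectral unmixing of multi-wavelength photoacoustic amplitudes into oxy- and deoxy-hemoglobin contributions. The sketch below shows the textbook least-squares formulation; the extinction-coefficient and signal values are placeholders (real values must come from tabulated hemoglobin spectra and fluence-corrected measurements), and this is not code from any of the cited studies.

```python
# Generic linear spectral unmixing for photoacoustic sO2 estimation.
# epsilon_hbo2 / epsilon_hb are molar extinction coefficients of oxy- and
# deoxy-hemoglobin at the measurement wavelengths; the numbers used below
# are placeholders, not tabulated values.
import numpy as np

def estimate_so2(pa_signals, epsilon_hbo2, epsilon_hb):
    """pa_signals: PA amplitudes at N wavelengths for one pixel (N >= 2)."""
    E = np.column_stack([epsilon_hbo2, epsilon_hb])        # N x 2 system matrix
    conc, *_ = np.linalg.lstsq(E, pa_signals, rcond=None)  # [C_HbO2, C_Hb]
    c_hbo2, c_hb = np.clip(conc, 0, None)                  # keep concentrations >= 0
    total = c_hbo2 + c_hb
    return c_hbo2 / total if total > 0 else np.nan         # sO2 = C_HbO2 / (C_HbO2 + C_Hb)

# Illustrative call with placeholder numbers (two wavelengths):
so2 = estimate_so2(np.array([1.00, 0.80]),
                   epsilon_hbo2=np.array([0.9, 1.1]),
                   epsilon_hb=np.array([1.4, 0.7]))
print(f"estimated sO2 ~ {so2:.2f}")
```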
Figure 1. Summary of photoacoustic (PA) monitoring after drug delivery, representing label-free PA monitoring and exogenous agent-based monitoring. In this review, we first discuss studies that observed changes in the vascular system and individual vessel responses to drug delivery (top left). Then, we highlight label-free PA treatment monitoring and brain hemodynamics, including oxygen saturation mapping and associated hemodynamic changes post-drug administration (top right). Finally, we review exogenous agent-based PA monitoring, categorizing the agents used as follows: small-molecule dyes, polymer-based nanoparticles, and metallic nanoparticles (bottom). SO2, oxygen saturation. The images are adapted with permission from [80,81,82,83,84,85,86]. Copyright 2023 American Chemical Society.
Figure 2. (a) PA images of vasoconstriction after the subcutaneous injection and topical application of hydrocortisone, and quantified vascular changes in the papillary dermis and reticular dermis. (b) PA maximum amplitude projection (MAP) images and diameter-mapped images after the injection of different glucose concentrations at 20 min (#7), 60 min (#15), and 135 min (#30) post-injection. (c) Volume images of the iliac artery after injection of sildenafil and the internal thoracic artery after injection of G-1 (left), and images of the uterine artery and fetal vasculature after injection of vasodilators (right). HC, hydrocortisone; NS, nonsteroidal; sub., subcutaneous injection; top., topical application; ED, epidermis; PA, photoacoustic; MAP, maximum amplitude projection; UM, umbilical cord; SP, spiral artery; and FS, fetal side. The images are adapted with permission from [81,105,106].
Figure 3. (a) Oxygen saturation images of a mouse brain in response to sodium nitroprusside. The images visualize individual vessels showing vasoconstriction (white arrows) or vasodilation (yellow arrows). The white box indicates a magnified region, which is enlarged for the same area at all time points. Scale bar, 1 mm (top) and 100 μm (bottom). (b) Quantitative vessel diameter, area, and oxygen saturation changes after SNP injection (gray bar). Data are shown as mean ± standard error of the mean. (c) PA images of the first 60 min after WST11 administration to induce VTP. White arrows indicate vessel disruption after VTP, and white dashed circles show regions in which sO2 levels decreased. sO2, oxygen saturation; VA, vessel area; SNP, sodium nitroprusside; VTP, vascular-targeted photodynamic therapy. The images are adapted with permission from [82,83].
Figure 4. (a) Ultrafast functional PAM images of a placenta at baseline, 2 min, and 12 min after alcohol consumption. Scale bar, 1 mm. (b) Changes in vessel density and oxygen saturation in the placenta after alcohol consumption (N = 3). The gray bars indicate the start of alcohol administrations (100 s). Data are shown as mean ± standard error of the mean. (c) Changes in the oxygen saturation level of the dermis and hypodermis after adrenaline injection in the human forearm. * p < 0.05; sO2, oxygen saturation. The images are adapted with permission from [84,117].
Figure 5. (a) PA MAP images of a whole-body rat before and after ICG injection and 3D time-to-peak images of ICG perfusion in the rat kidney. (b) Absorbance of ICG, ICG-JA, and JAAZ particles at 50 µM. (c) Schematic of the functionalization of JAAZ particles coated with biomolecules. (d) PA MAP images of mice after ICG and RGD-JAAZ injection and PA unmixed images of HbO, HbR, and RGD-JAAZ. The red colormap shows HbO, blue shows HbR, green shows RGD-JAAZ, and gray shows the PA signal. (e) US and PA unmixed images of HbO, HbR, and AF2 and comparison of PA signals between the control and TBI models (top). The bar graphs show statistical (N = 3) PA amplitudes of HbO, HbR, and AF2 after spectral unmixing. PA, photoacoustic; MAP, maximum amplitude projection; ICG, indocyanine green; JAs, J-aggregates; JAAZs, azide-modified ICG J-aggregates; US, ultrasound; TBI, traumatic brain injury; HbO, oxy-hemoglobin; HbR, deoxy-hemoglobin; and AF, aminofluorene. The images are adapted with permission from [80,122,123].
Figure 6. (a) Schematic of a gas-generating gold nanorod modified with azide on a silica coating (left). GLANCE’s US signal peaked at 815 nm, matching its SPR peak, while the gold nanorods showed no signal change with varying laser wavelengths (center). US and PA images of the tumor corresponding to the laser irradiation (right). The green color mapping shows the difference in US intensity between right after and before laser irradiation, and the red colormap represents the difference in PA intensity. (b) Comparison of PA images of MDA-MB-231 cells between FeS2-PEG8 and PBS after 24 h uptake (left). PA images of xenograft mice bearing MDA-MB-231 breast tumors 1 h after FeS2-PEG8 injection (right). (c) PA and US images of the thrombotic artery after PBS, B@SP NPs, or B@SP-C NPs. Yellow dashed circles represent the thrombotic artery areas. The corresponding PA signals of the thrombotic artery (n = 5). US, ultrasound; SPR, surface plasmon resonance; PA, photoacoustic; NP, nanoparticle. The images are adapted with permission from [85,86,127]. Copyright 2023 and 2024 American Chemical Society.
13 pages, 4759 KiB  
Article
White Light Diffraction Phase Microscopy in Imaging of Breast and Colon Tissues
by Adriana Smarandache, Ruxandra A. Pirvulescu, Ionut-Relu Andrei, Andra Dinache, Mihaela Oana Romanitan, Daniel Constantin Branisteanu, Mihail Zemba, Nicoleta Anton, Mihail-Lucian Pascu and Viorel Nastasa
Diagnostics 2024, 14(17), 1966; https://doi.org/10.3390/diagnostics14171966 - 6 Sep 2024
Viewed by 652
Abstract
This paper reports results obtained using white light diffraction phase microscopy (wDPM) to image breast and colon tissue samples, marking a contribution to the advancement of biomedical imaging. Unlike conventional brightfield microscopy, wDPM offers the capability to capture intricate details of biological specimens with enhanced clarity and precision. It combines high resolution, enhanced contrast, and quantitative capabilities with non-invasive, label-free imaging. These features make it a useful tool for tissue imaging, providing detailed and accurate insights into tissue structure and dynamics without compromising the integrity of the samples. Our findings underscore the potential of quantitative phase imaging in histopathology, in the context of automating the process of tissue analysis and diagnosis. Of particular note are the insights gained from the reconstructed phase images, which provide physical data regarding peripheral glandular cell membranes. These observations serve to focus attention on pathologies involving the basal membrane, such as early invasive carcinoma. Through our analysis, we aim to catalyze further advances in breast and colon tissue imaging. Full article
Figure 1. The phase reconstruction procedure. (a) The raw image of an unstained breast specimen, obtained as a result of the interference between the first and zero diffraction orders. The enlarged area in red shows the interference fringes. (b) Fourier transform of the raw image. (c) Quantitative phase map reconstructed after applying the inverse Fourier transform and extracting the background. The color bar represents the optical pathlength values expressed in nanometers.
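The reconstruction steps summarized in this caption—Fourier transforming the raw interferogram, isolating the modulated first diffraction order, inverse transforming, and removing the background—follow the standard off-axis quantitative-phase recipe. The following NumPy sketch illustrates that generic pipeline; the crop size, peak-finding heuristic, and background handling are assumptions, not the authors' processing code.

```python
# Generic off-axis interferogram phase reconstruction (illustrative sketch only;
# crop size and background handling are assumptions, not the wDPM code).
import numpy as np

def reconstruct_phase(interferogram, background_phase=None, crop=64):
    F = np.fft.fftshift(np.fft.fft2(interferogram))       # 2D spatial-frequency spectrum
    cy, cx = np.array(F.shape) // 2

    # Locate the +1 diffraction order away from the DC term
    # (assumes the peak lies well inside the image, not at its border).
    mag = np.abs(F).copy()
    mag[cy - crop:cy + crop, cx - crop:cx + crop] = 0      # suppress the zero order
    py, px = np.unravel_index(np.argmax(mag), mag.shape)

    # Crop the first order and re-center it (carrier demodulation).
    order = F[py - crop:py + crop, px - crop:px + crop]
    centered = np.zeros_like(F)
    centered[cy - crop:cy + crop, cx - crop:cx + crop] = order

    field = np.fft.ifft2(np.fft.ifftshift(centered))       # complex image field
    phase = np.angle(field)                                # wrapped phase map
    if background_phase is not None:
        phase = phase - background_phase                   # remove system background
    return phase  # optical pathlength = phase * wavelength / (2 * pi)
```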
Figure 2">
Figure 2. Unstained breast tissue characterized by typical ductal hyperplasia showing a mammary duct cross-section. (a) BF image taken with the inverted microscope (20×/0.4 NA objective); (b) phase map attained using the wDPM system, with the OPL profile plot at the line across the selected region of the basal membrane shown in detail. The color bar represents the OPL through the specimen, in nm (many pixels are saturated at 102 nm in order to enhance details in the ROI); the arrows indicate the ductal lumen, the ductal basal membrane, and a fibrocyte cell.
Figure 3. Unstained breast tissue characterized by typical ductal hyperplasia showing mammary ducts in longitudinal section. (a) BF image taken with the inverted microscope (20×/0.4 NA objective); (b) phase map attained using the wDPM system, where the color bar represents the OPL through the specimen, in nm (pixels are saturated at 78 nm in order to enhance details in the ROI); the arrows indicate the collagen stroma, an endothelial cell, and RBCs. Histograms of the collected signal (gray values) on the collagen stroma area are represented for images taken with the inverted microscope (c) and the wDPM setup (d), respectively.
Figure 4. Normal colon tissue—H&E staining. (a,b) BF images taken with the inverted microscope (20×/0.4 NA objective); (c,d) phase maps attained using the wDPM system, where the color bar represents the OPL through the specimens, in nm (pixels are saturated at 105 nm in (c) and at 120 nm in (d) in order to enhance details in the ROIs).
Figure 5. Malignant colon tissue—H&E staining. (a) BF image taken with the inverted microscope (20×/0.4 NA objective); (b) phase map attained using the wDPM system, where the color bar represents the optical pathlength through the specimen, in nm (pixels are saturated at 73 nm in order to enhance details in the ROI). Histograms of the collected signal (gray values) in the selected area are represented for images taken with the inverted microscope (c) and the wDPM setup (d), respectively.
10 pages, 2607 KiB  
Communication
Optical Interferometric Device for Rapid and Specific Detection of Biological Cells
by Sándor Valkai, Dániel Petrovszki, Zsombor Fáskerti, Margaréta Baumgärtner, Brigitta Biczók, Kira Dakos, Kevin Dósa, Berill B. Kirner, Anna E. Kocsis, Krisztina Nagy, István Andó and András Dér
Biosensors 2024, 14(9), 421; https://doi.org/10.3390/bios14090421 - 29 Aug 2024
Viewed by 4264
Abstract
Here, we report a rapid and accurate optical method for detecting cells from liquid samples in a label-free manner. The working principle of the method is based on the interference between parts of a conical laser beam coming directly from a single-mode optical fiber and parts reflected from a flat glass surface. The glass is functionalized with antibodies against the cells to be detected from the liquid sample. Cells bound to that surface modify the reflected beam and hence change the resulting interference pattern, too. By registering and interpreting the variation in the image, the presence of cells in the sample can be detected. As a demonstration, cell suspensions from a U937 cell line were used in glass chambers functionalized with antibodies (TMG6-5 (mIgG1)) to which the cells specifically bind. The limit of detection (LOD) of the method was also estimated. This proof-of-concept setup offers a cost-effective and easy-to-use way of rapidly and specifically detecting any type of cell (including pathogens) from suspensions (e.g., body fluids). The possible portability of the device suggests its applicability as a rapid test in clinical diagnostics. Full article
(This article belongs to the Special Issue Feature Paper in Biosensor and Bioelectronic Devices 2024)
Figure 1. Schematic 3D figure of the device. The side blocks (black) and the glass plates on top and the bottom (gray) form the sample chamber. Thanks to the leveled end faces and edges of the blocks and glass plates, the sample liquid (light blue) has a flat and vertical surface, which is at the same time the output optical window. The laser light enters the sample chamber from a single-mode optical fiber (dark blue). The figure is not to scale.
Figure 2">
Figure 2. Schematic representation of the working principle. (a) Side view of the light path. The light red region shows the laser beam originating from the single-mode optical fiber, forming a coaxial light cone; most of it (darker red) is reflected from the bottom surface of the sample-holding space (light blue), i.e., the interface functionalized by the analyte cells (green dots). In the overlapping area of the two parts, interference occurs that can be visualized/recorded by a screen (or the sensor of an imaging device). In order to make the concept easier to see, the drawing in the figure is not to scale (actually, the diameter of the optical fiber is 125 µm, while its distance from the end of the glass is about 4600 µm). (b) A 3D representation of the conical light beam. The color code is the same as used in (a). The lower part of the direct beam hits the bottom surface of the sample chamber in a parabolic region (shown in lighter red) and is reflected from it. Eventually, the direct and the reflected parts of the beam are stopped by the screen, where the interference is detected. The local change of reflectance in the elliptic region can be monitored as a variation in the interference stripes on the surface of the screen/detector. (The thin black line in the figure represents the edge of the substrate.)
Figure 3. The optical path difference for a reflected and a direct ray.
Figure 4">
Figure 4. Schematic representation of the evaluation procedure. In the figure, “1E4” refers to the 10^4 cells/mL concentration used in the experiment, selected as an example to demonstrate the process (for details on all of the concentrations, see Figures 5 and 6 and Figure S2 in the Supporting Information). The interference pattern of the reference (functionalized surface in pure PBS, without cells) is labeled “Reference” in the leftmost insert. The interference pattern recorded after the completed measuring cycle (after the cells bound to the surface and the chamber was flushed 3× with PBS) is in the rightmost insert. After a simple image processing procedure (see Figure S2, Supporting Information), represented by the middle panel, Fourier spectra were calculated for quantification of the effect.
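The quantification step described in this caption—computing Fourier spectra of the interference pattern before and after cell binding—can be prototyped along the following lines. This is a generic illustration only; the spatial-frequency band and the chosen metric are assumptions, not the authors' evaluation script.

```python
# Generic sketch: quantify changes in an interference (fringe) pattern by
# comparing Fourier amplitude spectra before and after cell binding.
# The chosen spatial-frequency band is an illustrative assumption.
import numpy as np

def fringe_spectrum(image):
    """Average 1D amplitude spectrum across the rows of a grayscale fringe image."""
    rows = image - image.mean(axis=1, keepdims=True)     # remove per-row DC offset
    return np.abs(np.fft.rfft(rows, axis=1)).mean(axis=0)

def binding_signal(reference_img, sample_img, band=slice(5, 50)):
    """Relative change of fringe-band spectral power after the measuring cycle."""
    ref = fringe_spectrum(reference_img)
    smp = fringe_spectrum(sample_img)
    return (smp[band].sum() - ref[band].sum()) / ref[band].sum()

# Usage: signal = binding_signal(reference_image, measured_image)
# where both arrays are grayscale images (e.g., 750 x 2100 pixels).
```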
Figure 5">
Figure 5. Grayscale images of the interference patterns recorded at various cell concentrations from 10^3 to 10^6 cells/mL, set in quasi-exponential-scale steps, and for the cell-free reference. (The resolution of the original images is 2100 × 750 pixels.)
Figure 6. Cell-concentration dependence of the size of the effect, and a logarithmic fit to the 7 measured points (appearing as a straight line in the semi-logarithmic representation). The horizontal and vertical error bars represent a pessimistic error estimate based on the accuracy of the cell concentrations (for details, see Section 2) and 3 successive measurements per concentration, respectively.
">