Search Results (1,421)

Search Parameters:
Keywords = digital camera

17 pages, 5987 KiB  
Article
Towards Mineralogy 4.0? Atlas of 3D Rocks and Minerals: Digitally Archiving Interactive and Immersive 3D Data for Rocks and Minerals
by Andrei Ionuţ Apopei
Minerals 2024, 14(12), 1196; https://doi.org/10.3390/min14121196 - 24 Nov 2024
Viewed by 356
Abstract
Mineralogy 4.0 can play a significant role in the future of geological research, education, and exploration by providing a more comprehensive and interactive understanding of rocks and minerals. This paper explores the application of digital photogrammetry and augmented reality (AR) technologies as part of Mineralogy 4.0. An atlas of 3D rocks and minerals with 915 high-quality models was created to showcase the potential of photogrammetry in the mineral sciences. The repository contains a wide range of sample types, featuring transparency, metallic luster, fluorescence, or millimetric-scale crystals. The three-dimensional rocks and minerals can also be accessed on-the-go through a mobile application that was developed for Android devices. Additionally, web applications have been developed with specific three-dimensional collections as well as three-dimensional storytelling. AR technology was also integrated into the 3D repository, allowing users to superimpose virtual 3D models of rocks and minerals onto real-world surfaces through their device’s camera. Also, a digital solution with 3D holograms of rocks and minerals was effectively implemented to provide an interactive and immersive experience. The 3D datasets of rocks and minerals can play a significant role in the geoscience community’s research, developing not only in-depth knowledge of specimens but also opening new frontiers in mineral sciences, leading towards a more advanced era of mineralogy. Full article
(This article belongs to the Special Issue Geomaterials and Cultural Heritage)
18 pages, 10654 KiB  
Article
Experimental Study on Variable Amplitude Fatigue Performance of High-Strength Bolts in Steel Structure Flange Connections
by Huaguang Ni, Shujia Zhang and Honggang Lei
Buildings 2024, 14(12), 3736; https://doi.org/10.3390/buildings14123736 - 24 Nov 2024
Viewed by 258
Abstract
Steel structure flange connections are extensively employed in structural nodes due to their superior mechanical properties. This study combines fatigue testing and theoretical methods to investigate the fatigue performance of high-strength bolts in flange connections under actual gradient descent loads and provide fatigue design methods. Initially, fatigue tests were conducted on two sets of high-strength bolts under a gradient descent loading mode, yielding a total of 11 sets of fatigue data. Subsequently, the stress–life (S-N) curve was plotted using a cumulative damage model combined with an equivalent constant amplitude stress method, and the results were compared with existing fatigue design specifications. Additionally, digital cameras and electron microscopes were utilized to capture fatigue fracture images of the high-strength bolts, allowing a detailed investigation into the mechanisms underlying bolt fatigue fractures. The results indicate that the allowable stress amplitudes for the two sets of high-strength bolts, corresponding to a fatigue life threshold of 2 million cycles, were 144.211 MPa and 130.316 MPa, respectively—both of which exceed the values specified in current fatigue design codes. Moreover, finite element simulations revealed that the most pronounced stress concentration occurs at the first thread where the bolt and nut engage, which is identified as the critical location for fatigue fracture in bolts. The allowable stress and fatigue calculation method for bolts obtained in this study will provide a reference for flange node design. Full article
(This article belongs to the Section Building Structures)
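The equivalent constant amplitude stress method mentioned in the abstract is commonly based on the Palmgren–Miner linear damage rule. The sketch below illustrates that idea only; the stress blocks, cycle counts, and S-N slope m = 3 are illustrative assumptions, not values from the study.

```python
# A minimal sketch (not the authors' code) of the equivalent constant amplitude
# stress method based on the Palmgren-Miner rule: a variable-amplitude block
# spectrum is collapsed into a single equivalent stress range for S-N evaluation.
# The stress blocks, cycle counts, and slope m = 3 are illustrative assumptions.
import numpy as np

def equivalent_stress_range(stress_ranges_mpa, cycle_counts, m=3.0):
    """S_eq = (sum(n_i * S_i^m) / sum(n_i))^(1/m)."""
    s = np.asarray(stress_ranges_mpa, dtype=float)
    n = np.asarray(cycle_counts, dtype=float)
    return (np.sum(n * s**m) / np.sum(n)) ** (1.0 / m)

# Hypothetical gradient-descent load blocks (MPa) and their cycle counts
blocks_mpa = [200.0, 180.0, 160.0, 140.0]
cycles = [5.0e4, 1.0e5, 2.0e5, 4.0e5]
print(f"Equivalent constant-amplitude stress range: "
      f"{equivalent_stress_range(blocks_mpa, cycles):.1f} MPa")
```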
Show Figures

Figure 1. High-strength bolts with large hexagonal heads for fatigue testing.
Figure 2. Schematic diagram of high-strength bolt size (mm): (a) M12 high-strength bolt; (b) M16 high-strength bolt.
Figure 3. Steel pipe flange T-shaped connector.
Figure 4. Schematic diagram of a T-shaped connector (mm): (a) Matching M12 high-strength bolt; (b) Matching M16 high-strength bolt.
Figure 5. MTS fatigue testing device: (a) Summary diagram of fatigue test device; (b) Installation location diagram for fatigue test.
Figure 6. Schematic diagram of gradient descent load loading method.
Figure 7. Sinusoidal loading mode of load in fatigue test.
Figure 8. Variable amplitude fatigue loading results of M12 bolts: (a) M12-1; (b) M12-2; (c) M12-3; (d) M12-4; (e) M12-5; (f) M12-6.
Figure 9. Variable amplitude fatigue loading results of M16 bolts: (a) M16-1; (b) M16-2; (c) M16-3; (d) M16-4; (e) M16-5.
Figure 10. High-strength bolt variable amplitude fatigue S-N curve: (a) M12 bolt specimen; (b) M16 bolt specimen.
Figure 11. S-N curves for constant amplitude and variable amplitude stress in bolt specimens: (a) M12 bolt specimen; (b) M16 bolt specimen.
Figure 12. Fatigue life of existing high-strength bolts with varying amplitudes.
Figure 13. Abaqus finite element model: (a) High-strength bolts; (b) Nuts; (c) Steel pipe flange T-shaped connector; (d) Overall assembly diagram.
Figure 14. Schematic diagram of stress concentration areas in high-strength bolts.
Figure 15. Macroscopic fatigue fracture image: (a) M12-1; (b) M16-1.
Figure 16. Microscopic image of fatigue source: (a) M12-1; (b) M16-1.
Figure 17. Microscopic image of the fatigue crack propagation zone: (a) M12-1; (b) M16-1.
Figure 18. Microscopic image of the transient fault zone: (a) M12-1; (b) M16-1.
19 pages, 73341 KiB  
Article
A Comparative Study on the Use of Smartphone Cameras in Photogrammetry Applications
by Photis Patonis
Sensors 2024, 24(22), 7311; https://doi.org/10.3390/s24227311 - 15 Nov 2024
Viewed by 412
Abstract
The evaluation of smartphone camera technology for close-range photogrammetry includes assessing captured photos for 3D measurement. In this work, experiments are conducted on many smartphones to study distortion levels and accuracy performance in close-range photogrammetry applications. Analytical methods and specialized digital tools are employed to evaluate the results. OpenCV functions estimate the distortions introduced by the lens. Diagrams, evaluation images, statistical quantities, and indicators are utilized to compare the results among sensors. The accuracy achieved in photogrammetry is examined using the photogrammetric bundle adjustment in a real-world application. In the end, generalized conclusions are drawn regarding this technology’s use in close-range photogrammetry applications. Full article
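The abstract notes that OpenCV functions are used to estimate the distortions introduced by the lens. A minimal sketch of a typical OpenCV checkerboard calibration of that kind is shown below; the board geometry, square size, and image folder are assumptions for illustration, not details taken from the paper.

```python
# A minimal sketch of an OpenCV checkerboard calibration: detect inner corners,
# refine them, then solve for the camera matrix and distortion coefficients.
import glob
import cv2
import numpy as np

board_cols, board_rows = 9, 6      # inner corners per row/column (assumed)
square_size = 0.025                # checkerboard square edge in metres (assumed)

# 3D coordinates of the board corners in the board's own coordinate system
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.jpg"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

if image_size is None:
    raise SystemExit("No checkerboard corners found in calib_images/")

# Camera matrix K and distortion coefficients (k1, k2, p1, p2, k3); the RMS
# re-projection error is one of the indicators compared across sensors.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS re-projection error:", rms)
print("Distortion coefficients:", dist.ravel())
```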
Show Figures

Figure 1. The layout of the control points used for the rectification procedure.
Figure 2. A set of photographs showing the checkerboard pattern used for camera calibration.
Figure 3. Distribution of Distortions for all smartphone cameras.
Figure 4. Distortions color gradients to micrometers.
Figure 5. Rectification Error color gradients to millimeters.
Figure 6. The converged photos of the control field.
Figure 7. A control point in the building’s facade. Left: In the actual photo. Right: Through the total station’s telescope.
Figure 8. The transformation of the Control Point Coordinates to a New Reference System. (a) The new Reference Coordinate System. (b) The coordinates transformation on the Surveyor-Photogrammetry software version 6.1.
Figure 9. An instance of the bundle adjustment function on the Surveyor-Photogrammetry software version 6.1.
Figure 10. Normalized values of the Pixel size, Number of pixels, Sensor Area, RMS 3D error, Mean Rectification error, Re-projection error, and Mean Distortion.
Figure 11. Scatter Charts for All Cameras with the trendline calculated by linear regression. (a) RMS 3D Error and Mean Distortion. (b) RMS 3D Error and Pixel Size. (c) RMS 3D Error and the Sensors Area. (d) Pixel Size and Sensor Area. (e) Pixel Size and Total Number of Pixels. (f) RMS 3D Error and Re-projection Error.
15 pages, 7931 KiB  
Article
Color Models in the Process of 3D Digitization of an Artwork for Presentation in a VR Environment of an Art Gallery
by Irena Drofova and Milan Adamek
Electronics 2024, 13(22), 4431; https://doi.org/10.3390/electronics13224431 - 12 Nov 2024
Viewed by 557
Abstract
This study deals with the color reproduction of a work of art to digitize it into a 3D realistic model. The experiment aims to digitize a work of art for application in a virtual reality environment concerning faithful color reproduction. Photogrammetry and scanning with a LiDAR sensor are used to compare the methods and work with colors during the reconstruction of the 3D model. An innovative tablet with a camera and LiDAR sensor is used for both methods. At the same time, current findings from the field of color vision and colorimetry are applied to 3D reconstruction. The experiment focuses on working with the RGB and L*a*b* color models and, simultaneously, on the sRGB, CIE XYZ, and Rec.2020(HDR) color spaces for transforming colors into a virtual environment. For this purpose, the color is defined in the Hex Color Value format. This experiment is a starting point for further research on color reproduction in the digital environment. This study represents a partial contribution to the much-discussed area of forgeries of works of art in current trends in forensics and forgery. Full article
(This article belongs to the Section Electronic Multimedia)
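As a small illustration of the colour representations discussed (Hex Color Values, sRGB, and CIE XYZ), the sketch below converts an 8-bit sRGB triplet to a hex string and to CIE XYZ using the standard sRGB (D65) linearization and matrix. It is not the authors' pipeline; the sample colour is simply the #758605 green that the study uses for point segmentation.

```python
# A minimal sketch (assumptions, not the authors' pipeline) of two colour
# representations: 8-bit sRGB -> hex colour value and 8-bit sRGB -> CIE XYZ.
import numpy as np

def rgb_to_hex(r, g, b):
    """8-bit RGB triplet -> hex colour value string."""
    return f"#{r:02X}{g:02X}{b:02X}"

def srgb_to_xyz(rgb8):
    """8-bit sRGB -> CIE XYZ (D65) using the IEC 61966-2-1 transfer function."""
    c = np.asarray(rgb8, dtype=float) / 255.0
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    return M @ lin

sample = (0x75, 0x86, 0x05)   # the green #758605 used for point segmentation in the study
print(rgb_to_hex(*sample), srgb_to_xyz(sample).round(4))
```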
Show Figures

Figure 1. Digitization of an art object: (a) 2D digitized object and detail marked in red; (b) matrix of partial details in the yellow range of the image; and (c) visualization of the detail of the structure and color of a partial part of the object.
Figure 2. The basic principle of the Structure from Motion (SfM) method [37].
Figure 3. Creation of a 3D model using the SfM photogrammetry method: (a) Digitized object; (b) position of 24 photos from which the basic cloud of points is created; (c) Dense Cloud generation; (d) the resulting 3D texture model of the artwork.
Figure 4. Creating a 3D model using a LiDAR sensor: (a) digitized object; (b) 3D model generated by Scaniverse; (c) 3D texture model imported into Agisoft 3D SW; and (d) generated point cloud from the textured 3D model.
Figure 5. Generated Dense Cloud: (a) 3D SfM photogrammetry method and (b) LiDAR sensor.
Figure 6. Colorimetry: (a) RGB color model and (b) sRGB color space (gamut).
Figure 7. SfM—Points Segmentation #758605: (a) Dense Cloud 3D model using SfM photogrammetry; (b) segmentation points by color G#758605; (c) body #758605 in Dense Cloud.
Figure 8. LiDAR—Segmentation of points #758605: (a) 3D model using LiDAR sensor; (b) Segmentation of points by color G#758605; (c) detail of the points generated in Dense Cloud.
Figure 9. CIE XYZ 1931 standardized color space: (a) Basic ColorChecker standardized color gamut; (b) position of individual standardized colors in the CIE 1931 chromatic diagram; (c) color model L*a*b*; (d) CIE 1931 chromaticity diagram with Rec.2020, sRGB, and L*a*b gamuts.
Figure 10. Visual comparison of reproduction quality in the process of 3D modeling and color segmentation: 3D models using the SfM photogrammetry method and 3D models using the LiDAR sensor.
Figure 11. Visualization of a realistic 3D reconstruction of the artwork: (a) 3D Dense Cloud model; (b) 3D texture model by the SfM method; (c) 3D texture model by LiDAR sensor.
15 pages, 7754 KiB  
Article
The Effects of Localized Heating and Ethephon Application on Cambial Reactivation, Vessel Differentiation, and Resin Canal Development in Lacquer Tree, Toxicodendron vernicifluum, from Winter to Spring
by Novena Puteri Tiyasa, Md Hasnat Rahman, Satoshi Nakaba and Ryo Funada
Forests 2024, 15(11), 1977; https://doi.org/10.3390/f15111977 - 8 Nov 2024
Viewed by 409
Abstract
Resin canals serve as a natural feature with the function of a defense system against fungi, bacteria, and insects. Trees can form these canals in response to mechanical injury and ecological disturbance. Factors, such as plant hormones and temperature, influence cambial activity and cell differentiation. This study examined the effects of increased temperature and plant hormones on cambial reactivation, vessel formation, and resin canal formation using localized heating and the application of the ethylene generator ethephon to dormant stems of the Toxicodendron vernicifluum seedlings. Localized heating was achieved by wrapping an electric heating ribbon around dormant stems, while ethephon was applied to the bark surface. Treatment was initiated on 29 January 2021, including control, heating, ethephon, and a combination of heating and ethephon. Cambial reactivation and resin canal formation were monitored using light microscopy, and bud growth was recorded with a digital camera. Localized heating induced earlier phloem reactivation, cambial reactivation, and xylem differentiation, increasing the number of vessels. The application of exogenous ethylene delayed these processes. The combination of localized heating and exogenous ethylene application resulted in smaller vessels and larger resin canals. These results suggest that increased temperature plays a significant role in cambial reactivation and vessel formation in ring-porous hardwood and that ethylene affects vessel differentiation and resin canal development. Full article
(This article belongs to the Section Wood Science and Forest Products)
Show Figures

Figure 1. Meteorological data showing the maximum, average, and minimum daily air temperatures at the experimental site in Fuchu, Tokyo, Japan, from January to March 2021. Dotted line ① indicates the timing of the initial phloem cell division in heat-treated seedlings. Dotted line ② indicates the timing of the initial phloem cell division in control and ethephon-treated seedlings and the timing of the cambial reactivation in heat-plus-ethephon-treated seedlings.
Figure 2. Apical bud conditions of T. vernicifluum seedlings 14 days, 28 days, 42 days, and 56 days after treatments. Dormant apical buds were observed until 26 February 2021 in all treatments. Bud swelling was observed on 12 March 2021, and leaf development and the start of shoot extension began on 26 March 2021 in all treatments.
Figure 3. Light micrographs showing transverse sections of T. vernicifluum on 29 January 2021, with (A) low magnification and (B) high magnification. White arrows indicate resin canals in the phloem area. Ph: phloem; Ca: cambial zone; Xy: xylem. Scale: (A) 100 μm, (B) 25 μm.
Figure 4. Light micrographs showing the cambial zone in transverse sections of (A) control seedlings, (B) heat-treated seedlings, (C) ethephon-treated seedlings, and (D) heat-plus-ethephon-treated seedlings of T. vernicifluum on 12 February 2021. Dormant cambial cells were observed in all treatments; however, initial phloem cell division was detected in the heat-treated seedling, indicated by the yellow arrow. Ph: phloem; Ca: cambial zone; Xy: xylem; Rc: resin canal. Scale: 50 μm.
Figure 5. Light micrographs showing cambial activity in transverse sections of (A) control seedlings, (B) heat-treated seedlings, (C) ethephon-treated seedlings, and (D) heat-plus-ethephon-treated seedlings of T. vernicifluum on 26 February 2021. Cell division was observed in the phloem and cambium in all treatments. Differentiating vessel elements were only detected in the heat-treated seedling. Yellow arrows indicate cell division in the phloem and cambium. A red arrow indicates a differentiating vessel element. Ph: phloem; Ca: cambial zone; Xy: xylem. Scale: 50 μm.
Figure 6. Light micrographs showing cambial activity in transverse sections of (A) control seedlings, (B) heat-treated seedlings, (C) ethephon-treated seedlings, and (D) heat-plus-ethephon-treated seedlings of T. vernicifluum on 12 March 2021. Cambial cell division was observed in all treatments. Differentiating vessel elements were detected in the control, heat-treated, and heat-plus-ethephon-treated seedlings. Yellow arrows indicate cambial cell division. Red arrows indicate differentiating vessel elements. Ph: phloem; Ca: cambial zone; Xy: xylem; Rc: resin canal. Scale: 50 μm.
Figure 7. Light micrographs showing cambial activity in transverse sections of (A) control seedlings, (B) heat-treated seedlings, (C) ethephon-treated seedlings, and (D) heat-plus-ethephon-treated seedlings of T. vernicifluum on 26 March 2021. Cambial cell division and differentiating vessel elements were observed in all treatments. Yellow arrows indicate cambial cell division. Red arrows indicate differentiating vessel elements. Ph: phloem; Ca: cambial zone; Xy: xylem. Scale: 50 μm.
Figure 8. Graphs showing (A) xylem width, (B) number of vessels, (C) diameter of vessels, and (D) area of vessels in the current year’s xylem of T. vernicifluum on 26 March 2021 (n = 3). Columns and bars show mean values ± s.d. Means with different letters are significantly different at p < 0.05 (one-way ANOVA, Tukey’s HSD test).
Figure 9. Light micrographs showing transverse sections of (A) control seedlings, (B) heat-treated seedlings, (C) ethephon-treated seedlings, and (D) heat-plus-ethephon-treated seedlings of T. vernicifluum on 26 March 2021. Ph: phloem; Ca: cambial zone; Rc: resin canal; NXy: current year’s xylem. Scale: 100 μm.
Figure 10. Graph showing (A) number and (B) diameter of the resin canals of T. vernicifluum on 26 March 2021 (n = 3). Columns and bars show mean values ± s.d. Means with different letters are significantly different at p < 0.05 (one-way ANOVA, Tukey’s HSD test).
12 pages, 1275 KiB  
Article
A Simple and Green Analytical Alternative for Chloride Determination in High-Salt-Content Crude Oil: Combining Miniaturized Extraction with Portable Colorimetric Analysis
by Alice P. Holkem, Giuliano Agostini, Adilson B. Costa, Juliano S. Barin and Paola A. Mello
Processes 2024, 12(11), 2425; https://doi.org/10.3390/pr12112425 - 3 Nov 2024
Viewed by 898
Abstract
A simple and miniaturized protocol was developed for chloride extraction from Brazilian pre-salt crude oil for further salt determination by colorimetry. In this protocol, the colorimetric analysis of chloride using digital images was carried out in an aqueous phase obtained after a simple and miniaturized extraction carefully developed for this purpose. A portable device composed of a homemade 3D-printed chamber with a USB camera was used. The PhotoMetrix app converted the images into RGB histograms, and a partial least squares (PLS) model was obtained from chemometric treatment. The sample preparation was performed by extraction after defining the best conditions for the main parameters (e.g., extraction time, temperature, type and volume of solvent, and sample mass). The PLS model was evaluated considering the coefficient of determination (R2) and the root mean square errors (RMSEs)—calibration (RMSEC), cross-validation (RMSECV), and prediction (RMSEP). Under the optimized conditions, an extraction efficiency higher than 84% was achieved, and the limit of quantification was 1.6 mg g−1. The chloride content obtained in the pre-salt crude oils ranged from 3 to 15 mg g−1, and no differences (ANOVA, 95%) were observed between the results and the reference values by direct solid sampling elemental analysis (DSS-EA) or the ASTM D 6470 standard method. The easy-to-use colorimetric analysis combined with the extraction method’s simplicity offered a high-throughput, low-cost, and environmentally friendly method, with the possibility of portability. Additionally, the decrease in energy consumption and waste generation, increasing the sample throughput and operators’ safety, makes the proposed method a greener approach. Furthermore, the cost savings make this a suitable option for routine quality control, which can be attractive in the crude oil industry. Full article
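The abstract describes converting RGB histograms into a partial least squares (PLS) calibration evaluated by RMSEC, RMSECV, and RMSEP. The sketch below shows that general workflow with scikit-learn on synthetic data; the feature dimensions, component count, and values are assumptions, not the study's data.

```python
# A minimal sketch of a PLS calibration of colorimetric data: histogram features in,
# concentration out, with calibration and prediction errors reported.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((60, 48))                 # hypothetical RGB-histogram features per image
y = rng.uniform(3, 15, size=60)          # hypothetical chloride contents (mg/g)

X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

rmsec = np.sqrt(mean_squared_error(y_cal, pls.predict(X_cal).ravel()))
rmsep = np.sqrt(mean_squared_error(y_test, pls.predict(X_test).ravel()))
print(f"RMSEC = {rmsec:.2f} mg/g, RMSEP = {rmsep:.2f} mg/g")
```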
Show Figures

Figure 1. Schematic showing the apparatus optimized in this work for colorimetric analysis of chloride in crude oil aqueous extracts obtained by a miniaturized sample preparation protocol.
Figure 2. Results for chloride in aqueous extracts from crude oils produced by colorimetric analysis with the portable device or by potentiometry (0.7 g of sample, 35 min at 55 °C, with 1 mL of ethyl acetate as the solvent and 5 mL of water as the extraction solution), and reference values produced by DSS-EA (results in mg g⁻¹, mean ± standard deviation, n = 3).
Figure 3. Scores for the methods for chloride determination in crude oil by AGREEprep analysis [43]. (A) Proposed extraction–colorimetric method, (B) ASTM D 6470 standard method, and (C) DSS-EA.
29 pages, 50680 KiB  
Article
Relative Radiometric Correction Method Based on Temperature Normalization for Jilin1-KF02
by Shuai Huang, Song Yang, Yang Bai, Yingshan Sun, Bo Zou, Hongyu Wu, Lei Zhang, Jiangpeng Li and Xiaojie Yang
Remote Sens. 2024, 16(21), 4096; https://doi.org/10.3390/rs16214096 - 2 Nov 2024
Viewed by 643
Abstract
The optical remote sensors carried by the Jilin-1 KF02 series satellites have an imaging resolution better than 0.5 m and a swath width of 150 km. There are radiometric problems, such as stripe noise, vignetting, and inter-slice chromatic aberration, in their raw images. In this paper, a relative radiometric correction method based on temperature normalization is proposed for the response characteristics of the sensors and the optical splicing structure of the Jilin-1 KF02 series satellite cameras. Firstly, a model of the temperature effect on sensor output is established to correct the variation in the sensor's output digital number (DN) caused by temperature changes during the imaging process, and the image is normalized to a uniform temperature reference. Then, the horizontal stripe noise of the image is eliminated using the sensor scan line and dark pixel information, and the vertical stripe noise is eliminated using on-orbit histogram statistics. Finally, superposition compensation is used to correct the vignetting area at the edge of the image, caused by the lack of energy information received by the sensor, so as to ensure the consistency of the image in color and image quality. The proposed method is verified with Jilin-1 KF02A on-orbit images. Experimental results show that the image response is uniform and the color is consistent: the average Streak Metrics (SM) is better than 0.1%, the Root-Mean-Square Deviation of the Mean Line (RA) and Generalized Noise (GN) are better than 2%, and the Relative Average Spectral Error (RASE) and Relative Dimensionless Global Error in Synthesis (ERGAS) are greatly improved, reaching better than 5% and 13, respectively. The relative radiation quality is clearly improved after relative radiometric correction. Full article
(This article belongs to the Special Issue Optical Remote Sensing Payloads, from Design to Flight Test)
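One step the abstract names is removing vertical stripe noise with on-orbit histogram statistics. The sketch below shows a generic per-column histogram-matching pass of that kind on synthetic data; it is an assumed illustration, not the authors' algorithm or calibration.

```python
# A minimal, assumed illustration of per-column histogram matching: each detector
# column's DN distribution is mapped onto the global image distribution, which
# suppresses column-to-column gain differences (vertical stripes).
import numpy as np

def match_column_histograms(image):
    """Map every column's DN distribution onto the global image distribution."""
    img = image.astype(np.float64)
    ref = np.sort(img.ravel())                           # reference distribution
    quantiles = np.linspace(0.0, 1.0, ref.size)
    out = np.empty_like(img)
    for col in range(img.shape[1]):
        column = img[:, col]
        ranks = np.argsort(np.argsort(column)) / max(column.size - 1, 1)
        out[:, col] = np.interp(ranks, quantiles, ref)   # quantile mapping
    return out

# Synthetic frame with a 5% gain difference on every other column (stripe noise)
rng = np.random.default_rng(1)
raw = rng.normal(100.0, 10.0, size=(128, 64))
raw[:, ::2] *= 1.05
corrected = match_column_histograms(raw)
print("column-mean spread before:", float(raw.mean(axis=0).std()),
      "after:", float(corrected.mean(axis=0).std()))
```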
Show Figures

Figure 1. Temperature drift characteristics of the sensor. (a) DN of the sensor at different temperatures; (b) DN of some pixels at different temperatures.
Figure 2. Stripe noises in level 0 images of Jilin-1 KF02A. (a) Raw panchromatic image; (b) Raw multispectral image.
Figure 3. Vignetting and chromatic aberration in a level 0 image of Jilin-1 KF02A.
Figure 4. Jilin-1 KF02A level 0 images of different ground object scenes. (a) Desert; (b) Vegetation; (c) Coast; (d) City; (e) Mountain; (f) Snow.
Figure 5. Workflow of relative radiometric correction method.
Figure 6. Fitting results of k_T for each pixel. (a) Panchromatic band; (b) Multispectral bands.
Figure 7. Fitting results of v_T for each pixel. (a) Panchromatic band; (b) Multispectral bands.
Figure 8. Fitting results of model coefficients for each pixel. (a) Coefficient a; (b) Coefficient c; (c) Coefficient b; (d) Coefficient d.
Figure 9. Goodness of fit of the model for each pixel. (a) Goodness of fit of k_T; (b) Goodness of fit of v_T.
Figure 10. The diagram of the histogram matching method.
Figure 11. The average DN curves per column of the sensor at different energy levels. (a) DN of odd row pixels; (b) DN of even row pixels.
Figure 12. The diagram of pixel distribution transformation relationship.
Figure 13. The diagram of superposition compensation in vignetting imaging region.
Figure 14. Comparison of correction effects in vignetting area. (a) Raw image; (b) Direct coefficient correction; (c) Superposition correction.
Figure 15. The calculation results of the raw image. (a) Response energy ratio; (b) Response stability.
Figure 16. The calculation results of the temperature normalization image. (a) Response energy ratio; (b) Response stability.
Figure 17. Comparison of correction effects in the vignetting area of vegetation. (a) Direct coefficient correction; (b) Superposition correction.
Figure 18. Comparison of correction effects in the vignetting area of bare soil. (a) Direct coefficient correction; (b) Superposition correction.
Figure 19. Comparison of correction effects in the vignetting area of city. (a) Direct coefficient correction; (b) Superposition correction.
Figure 20. Comparison of correction effects in the vignetting area of desert. (a) Direct coefficient correction; (b) Superposition correction.
Figure 21. Raw and corrected images of a city. (a) Raw image; (b) Corrected without temperature normalization; (c) Corrected using the proposed method.
Figure 22. Raw and corrected images of a mountain. (a) Raw image; (b) Corrected without temperature normalization; (c) Corrected using the proposed method.
Figure 23. Raw and corrected images of a desert. (a) Raw image; (b) Corrected without temperature normalization; (c) Corrected using the proposed method.
Figure 24. Raw and corrected images of water. (a) Raw image; (b) Corrected without temperature normalization; (c) Corrected using the proposed method.
Figure 25. Raw and corrected images of vegetation. (a) Raw image; (b) Corrected without temperature normalization; (c) Corrected using the proposed method.
Figure 26. Comparison of SM calculation results.
18 pages, 3127 KiB  
Article
Precise Geoid Determination in the Eastern Swiss Alps Using Geodetic Astronomy and GNSS/Leveling Methods
by Müge Albayrak, Urs Marti, Daniel Willi, Sébastien Guillaume and Ryan A. Hardy
Sensors 2024, 24(21), 7072; https://doi.org/10.3390/s24217072 - 2 Nov 2024
Viewed by 680
Abstract
Astrogeodetic deflections of the vertical (DoVs) are close indicators of the slope of the geoid. Thus, DoVs observed along horizontal profiles may be integrated to create geoid undulation profiles. In this study, we collected DoV data in the Eastern Swiss Alps using a Swiss Digital Zenith Camera, the COmpact DIgital Astrometric Camera (CODIAC), and two total station-based QDaedalus systems. In the mountainous terrain of the Eastern Swiss Alps, the geoid profile was established at 15 benchmarks over a two-week period in June 2021. The elevation along the profile ranges from 1185 to 1800 m, with benchmark spacing ranging from 0.55 km to 2.10 km. The DoV, gravity, GNSS, and leveling measurements were conducted on these 15 benchmarks. The collected gravity data were primarily used for corrections of the DoV-based geoid profiles, accounting for variations in station height and the geoid-quasigeoid separation. The GNSS/leveling and DoV data were both used to compute geoid heights. These geoid heights are compared with the Swiss Geoid Model 2004 (CHGeo2004) and two global gravity field models (EGM2008 and XGM2019e). Our study demonstrates that absolute geoid heights derived from GNSS/leveling data achieve centimeter-level accuracy, underscoring the precision of this method. Comparisons with CHGeo2004 predictions reveal a strong correlation, closely aligning with both GNSS/leveling and DoV-derived results. Additionally, the differential geoid height analysis highlights localized variations in the geoid surface, further validating the robustness of CHGeo2004 in capturing fine-scale geoid heights. These findings confirm the reliability of both absolute and differential geoid height calculations for precise geoid modeling in complex mountainous terrains. Full article
(This article belongs to the Section State-of-the-Art Sensors Technologies)
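The abstract's statement that DoVs observed along a profile may be integrated into geoid undulations corresponds to classical astrogeodetic leveling, dN ≈ −ε ds. The sketch below applies that relation with the trapezoid rule; the DoV values and benchmark spacings are hypothetical, not the Surses profile data.

```python
# A minimal sketch of astrogeodetic levelling (an assumed textbook formula, not the
# authors' processing chain): relative geoid heights from along-profile DoV components.
import numpy as np

ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def dov_profile_to_geoid(epsilon_arcsec, spacing_m):
    """Relative geoid undulations (m) from along-profile DoV components (arcsec)."""
    eps = np.asarray(epsilon_arcsec, dtype=float) * ARCSEC_TO_RAD
    ds = np.asarray(spacing_m, dtype=float)             # distances between marks
    dN = -0.5 * (eps[:-1] + eps[1:]) * ds               # trapezoidal integration of -eps*ds
    return np.concatenate(([0.0], np.cumsum(dN)))       # relative to the first benchmark

# Hypothetical DoV components (arcsec) at five marks with roughly 1 km spacing
eps = [10.2, 10.8, 11.5, 11.1, 10.6]
spacing = [950.0, 1100.0, 1250.0, 800.0]
print(dov_profile_to_geoid(eps, spacing))
```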
Show Figures

Figure 1. Surses region geoid profile in the Eastern Swiss Alps, where DoV, GNSS, leveling, and gravity observations were carried out on 15 benchmarks. The Shuttle Radar Topography Mission (SRTM) is visualized along the profile.
Figure 2. The data collection and analysis scheme for this study, including the types of datasets used and the flow of information in the reduction and processing of these data points into geoid profiles.
Figure 3. Free-air anomaly (red) versus the orthometric heights (blue).
Figure 4. Geoid profile corrections (relative to the first mark) added to the quasigeoid profile derived from observed deflections of the vertical (DoV). These include the dynamic correction derived from observed gravity disturbances and the geoid-quasigeoid separation term.
Figure 5. (a) Absolute geoid height results from GNSS/leveling and DoV compared with CHGeo2004, EGM2008, and XGM2019e. (b) The same results relative to the first mark in the profile. (c) The same results minus CHGeo2004.
Figure 6. Differential geoid height calculation results from DoV, GNSS/leveling, EGM2008, and XGM2019e. To remove long-wavelength signals, each point shows the difference in geoid height between a point and the previous point in the survey.
21 pages, 12870 KiB  
Article
Consumer Usability Test of Mobile Food Safety Inquiry Platform Based on Image Recognition
by Jun-Woo Park, Young-Hee Cho, Mi-Kyung Park and Young-Duk Kim
Sustainability 2024, 16(21), 9538; https://doi.org/10.3390/su16219538 - 1 Nov 2024
Viewed by 671
Abstract
Recently, as the types of imported food and the design of their packaging become more complex and diverse, digital recognition technologies such as barcodes, QR (quick response) codes, and OCR (optical character recognition) are attracting attention in order to quickly and easily check safety information (e.g., food ingredient information and recalls). However, consumers are still exposed to inaccurate and inconvenient situations because legacy technologies require dedicated terminals or include information other than safety information. In this paper, we propose a deep learning-based packaging recognition system which can easily and accurately determine food safety information with a single image captured through a smartphone camera. The detection algorithm learned a total of 100 kinds of product images and optimized YOLOv7 to secure an accuracy of over 95%. In addition, a new SUS (system usability scale)-based questionnaire was designed and conducted on 71 consumers to evaluate the usability of the system from the individual consumer’s perspective. The questionnaire consisted of three categories, namely convenience, accuracy, and usefulness, and each received a score of at least 77, which confirms that the proposed system has excellent overall usability. Moreover, in terms of task completion rate and task completion time, the proposed system is superior when it compared to existing QR code- or Internet-based recognition systems. These results demonstrate that the proposed system provides consumers with more convenient and accurate information while also confirming the sustainability of smart food consumption. Full article
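The usability evaluation builds on the system usability scale (SUS), whose standard scoring rule is easy to state. The sketch below implements the baseline 10-item SUS formula; the paper adapts SUS into three categories (convenience, accuracy, usefulness), so this is the generic calculation, not its exact questionnaire, and the responses shown are hypothetical.

```python
# A minimal sketch of the standard 10-item SUS scoring rule.
def sus_score(responses):
    """responses: ten answers on a 1-5 Likert scale, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)   # odd items positive, even negative
    return total * 2.5                                    # scale the 0-40 sum to 0-100

print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))          # hypothetical respondent -> 87.5
```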
Show Figures

Figure 1. The architecture of image recognition.
Figure 2. Query process via mobile app.
Figure 3. Query result for unsafe food.
Figure 4. Various shooting methods: (a) directions of light; (b) types of lamps; (c) angles of shooting; (d) subject locations.
Figure 5. Examples of taken photos.
Figure 6. Compound scaling up depth and width for concatenation-based model [23].
Figure 7. Real-time food classification processing system using YOLOv7.
Figure 8. FastAPI-based web server interface structure.
Figure 9. IoU definition.
Figure 10. Data split for training, validation, and testing.
Figure 11. Results of training.
Figure 12. Random image test.
Figure 13. Similar image test: (a) vanilla wafers vs. cacao wafers; (b) green olives vs. black olives.
Figure 14. Conducting usability tests: (a) samples of food products; (b) test via smartphone.
Figure 15. Checking for recalls via the FDA web pages.
Figure 16. SUS scores with different ages and academic levels: (a) comparison between different ages; (b) comparison between different academic levels.
Figure 17. Comparison of recognition results: (a) success; (b) failure.
24 pages, 9406 KiB  
Article
Lightweight Digit Recognition in Smart Metering System Using Narrowband Internet of Things and Federated Learning
by Vladimir Nikić, Dušan Bortnik, Milan Lukić, Dejan Vukobratović and Ivan Mezei
Future Internet 2024, 16(11), 402; https://doi.org/10.3390/fi16110402 - 31 Oct 2024
Viewed by 1932
Abstract
Replacing mechanical utility meters with digital ones is crucial due to the numerous benefits they offer, including increased time resolution in measuring consumption, remote monitoring capabilities for operational efficiency, real-time data for informed decision-making, support for time-of-use billing, and integration with smart grids, leading to enhanced customer service, reduced energy waste, and progress towards environmental sustainability goals. However, the cost associated with replacing mechanical meters with their digital counterparts is a key factor contributing to the relatively slow roll-out of such devices. In this paper, we present a low-cost and power-efficient solution for retrofitting the existing metering infrastructure, based on state-of-the-art communication and artificial intelligence technologies. The edge device we developed contains a camera for capturing images of a dial meter, a 32-bit microcontroller capable of running the digit recognition algorithm, and an NB-IoT module with (E)GPRS fallback, which enables nearly ubiquitous connectivity even in difficult radio conditions. Our digit recognition methodology, based on the on-device training and inference, augmented with federated learning, achieves a high level of accuracy (97.01%) while minimizing the energy consumption and associated communication overhead (87 μWh per day on average). Full article
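The digit recognition pipeline is augmented with federated learning, which at its core aggregates locally trained weights on a server. A minimal federated-averaging sketch is shown below; the layer shapes and per-device sample counts are assumptions, not the authors' model or deployment.

```python
# A minimal federated-averaging sketch (an assumption about the aggregation step, not
# the authors' implementation): the server combines weights returned by edge devices,
# weighting each device by its number of locally collected digit images.
import numpy as np

def federated_average(client_weights, client_sample_counts):
    """Weighted average of per-client parameter lists (one ndarray per layer)."""
    counts = np.asarray(client_sample_counts, dtype=float)
    shares = counts / counts.sum()
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(share * client[layer_idx]
                    for share, client in zip(shares, client_weights))
        averaged.append(layer)
    return averaged

# Two hypothetical clients, each holding a tiny two-layer model
client_a = [np.ones((3, 3)), np.zeros(3)]
client_b = [np.zeros((3, 3)), np.ones(3)]
global_weights = federated_average([client_a, client_b], [300, 100])
print(global_weights[0][0, 0], global_weights[1][0])      # 0.75 and 0.25
```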
Show Figures

Figure 1. Proposed SM architecture used for old TM retrofitting.
Figure 2. The system architecture consists of a collection of deployed SMs that communicate via MNO with the cloud.
Figure 3. Distribution of ML models on edge devices, which provides the basis for FL.
Figure 4. Design and components of the edge device are depicted on the left, whereas the right images display fabricated devices.
Figure 5. View of metering device through the camera lens of the edge device used for creation of datasets.
Figure 6. Conversion to B/W image format using a fixed threshold: TH = 128 (left), TH = 192 (right).
Figure 7. Initial image taken using the camera on the edge device, intermediate image after B/W conversion, and the final image without artifacts.
Figure 8. Scheme used for detecting differences between two digit images.
Figure 9. Proposed CNN architecture used for digit recognition.
Figure 10. The first test case comprises two distinct scenarios representing different training methodologies, one incorporating federated learning (FL) and the other without its use. (a) Scenario 1, which does not use FL. (b) Scenario 2, which utilizes averaging methodology for FL.
Figure 11. Second test case displaying the training scheme where the second batch of devices is trained based on results of training on the first batch.
Figure 12. Power consumption profile of image capture + preprocessing + inference.
Figure 13. Power consumption profile of data packet transmission via NB-IoT.
16 pages, 2578 KiB  
Article
The Photometric Testing of High-Resolution Digital Cameras from Smartphones—A Pilot Study
by Sławomir Zalewski and Krzysztof Skarżyński
Sensors 2024, 24(21), 6936; https://doi.org/10.3390/s24216936 - 29 Oct 2024
Viewed by 480
Abstract
Luminance is the fundamental photometric quantity representing the technical meaning of brightness. It is usually measured from a distance using a matrix sensor, which is the basis of professional instruments. However, specific technical requirements must be fulfilled to achieve accurate results. This paper considers whether modern high-resolution smartphone cameras are suitable for luminance measurements. Three cameras from high-end smartphones were evaluated on a dedicated laboratory stand. The sensors’ output characteristics showed relatively good linearity of the individual R, G, and B channels. Unfortunately, the spectral sensitivities were unfavorable, as the minimum error achieved was about 17%, which places these devices outside the generally accepted quality scale for photometric instruments. The presented investigation confirmed that none of the high-resolution smartphone cameras tested was suitable for use as a universal luminance camera. However, one of the test devices may be suitable for further development if restrictively calibrated and used only on a specialized laboratory stand. Using a smartphone (or only its camera) for luminance measurements requires proper advanced calibration; it is possible, but it limits the camera to dedicated applications. The pilot study presented in this paper will help create a suitable test stand for spectacle vision systems, e.g., virtual reality equipment. Full article
(This article belongs to the Section Optical Sensors)
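The calibration question the paper examines can be illustrated with a simple linear camera model: a weighted sum of linearized R, G, and B responses scaled by a proportionality k-factor. The sketch below uses the standard sRGB luminance weights and an arbitrary k purely for illustration; the actual per-device sensitivities and k-factors are what the study measures.

```python
# A minimal sketch of a linear camera model for luminance estimation. The weights
# (standard sRGB luminance coefficients) and k = 120 are illustrative assumptions,
# not the measured spectral sensitivities or calibrated k-factors from the paper.
import numpy as np

def luminance_estimate(rgb8, weights=(0.2126, 0.7152, 0.0722), k=120.0):
    """Estimate luminance (cd/m^2) from an 8-bit RGB pixel under a linear camera model."""
    c = np.asarray(rgb8, dtype=float) / 255.0
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)  # undo sRGB gamma
    return k * float(np.dot(weights, lin))      # k comes from calibration against a meter

print(f"{luminance_estimate((180, 180, 170)):.1f} cd/m^2")
```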
Show Figures

Figure 1. Relative spectral sensitivities of L, M, and S receptors and photopic vision of the human eye.
Figure 2. The scheme of the laboratory stand: PB—photometric bench, PS—power supply, L—light source (illuminant A—2856K), L—focusing lens, B—baffle, F—interference filter, D—diffuser, S1—place for spectroradiometer head, S2—place for the smartphone, LM—luminance meter, ΔD1—changeable distance between light source and diffuser to provide proper luminance value, D2—constant distance between diffuser and smartphone, ΔL—changeable luminance (for linearity), ΔSPD (for spectral sensitivity).
Figure 3. The spectral power densities emitted by the test surface using different interference filters.
Figure 4. Linearity responses for channels of tested smartphones: (a) Smartphone 1, (b) Smartphone 2, (c) Smartphone 3.
Figure 5. The value of the proportionality k-factor as a function of the logarithm of luminance for the smartphones tested: (a) Smartphone 1, (b) Smartphone 2, (c) Smartphone 3.
Figure 6. Matching measured R, G, and B spectral responses of particular smartphones to L, M, and S human eye receptor sensitivities: (a) to receptor L, (b) to receptor M, (c) to receptor S.
Figure 7. Matching calculated smartphone spectral responses to the photopic luminous efficiency function of a human eye.
20 pages, 9894 KiB  
Article
Estimation of Strawberry Canopy Volume in Unmanned Aerial Vehicle RGB Imagery Using an Object Detection-Based Convolutional Neural Network
by Min-Seok Gang, Thanyachanok Sutthanonkul, Won Suk Lee, Shiyu Liu and Hak-Jin Kim
Sensors 2024, 24(21), 6920; https://doi.org/10.3390/s24216920 - 28 Oct 2024
Viewed by 609
Abstract
Estimating canopy volumes of strawberry plants can be useful for predicting yields and establishing advanced management plans. Therefore, this study evaluated the spatial variability of strawberry canopy volumes using a ResNet50V2-based convolutional neural network (CNN) model trained with RGB images acquired through manual unmanned aerial vehicle (UAV) flights equipped with a digital color camera. A preprocessing method based on the You Only Look Once v8 Nano (YOLOv8n) object detection model was applied to correct image distortions influenced by fluctuating flight altitude under a manual maneuver. The CNN model was trained using actual canopy volumes measured using a cylindrical case and small expanded polystyrene (EPS) balls to account for internal plant spaces. Estimated canopy volumes using the CNN with flight altitude compensation closely matched the canopy volumes measured with EPS balls (nearly 1:1 relationship). The model achieved a slope, coefficient of determination (R2), and root mean squared error (RMSE) of 0.98, 0.98, and 74.3 cm3, respectively, corresponding to an 84% improvement over the conventional paraboloid shape approximation. In the application tests, the canopy volume map of the entire strawberry field was generated, highlighting the spatial variability of the plant’s canopy volumes, which is crucial for implementing site-specific management of strawberry crops. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2024)
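The conventional baseline the CNN is compared against approximates the canopy as a paraboloid of revolution, V = (1/2)πr²h. A minimal sketch of that approximation is below; the exact baseline formulation used in the paper may differ, and the diameter and height values are hypothetical, not measurements from the field trial.

```python
# A minimal sketch of a paraboloid-of-revolution canopy volume approximation,
# V = (1/2) * pi * r^2 * h. Plant dimensions below are assumed values.
import math

def paraboloid_canopy_volume(diameter_cm, height_cm):
    """Half-paraboloid volume commonly used to approximate a plant canopy (cm^3)."""
    radius = diameter_cm / 2.0
    return 0.5 * math.pi * radius**2 * height_cm

# Hypothetical strawberry plant: 30 cm canopy diameter, 18 cm canopy height
print(f"{paraboloid_canopy_volume(30.0, 18.0):.0f} cm^3")
```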
Show Figures

Figure 1. (a) Manual UAV flight over the strawberry experimental field. (b) RGB images at a resolution of 1920 × 1080 pixels were extracted and resized from the UAV video frames captured during manual UAV flight.
Figure 2. A flowchart of the object detection-based image preprocessing to resize the images using the ratio of the number of counted plants and missing plants to the average number of plants and missing plants across all images (Equations (1)–(3)).
Figure 3. (a) Cropped and resized ROI image of individual sample plants. (b) Sample plant images centered on a 512 × 512-pixel mulch background.
Figure 4. Description of the CNN model for estimating canopy volume, adopted from Gang et al. [34] with modifications. The model utilizes a pre-trained ResNet50V2 [55] with ImageNet [56] as the backbone, with two convolutional layers used for preprocessing, followed by a fully connected layer for regression.
Figure 5. An acrylic cylindrical case with a strawberry plant filled with EPS balls to calculate the volume difference between the number of balls with and without plants.
Figure 6. An overview of the development and testing process for estimating strawberry canopy volume in UAV RGB imagery using an object detection-based CNN.
Figure 7. (a) Comparison of canopy volumes estimated from the linear regression model using a paraboloid shape and the actual canopy volumes measured using EPS balls. (b) Comparison of canopy volumes estimated from the developed model using RGB test dataset images and measured canopy volumes with height compensation using the object detection model. The color symbols represent the values for each strawberry variety. The dashed line shows the regression line.
Figure 8. Canopy volumes estimated from the developed model using RGB test dataset images and measured canopy volumes without height compensation using the object detection model. The color symbols indicate values for each strawberry variety. The dashed line shows the regression line.
Figure 9. Comparison of canopy volumes estimated from the developed model and the actual canopy volumes measured using EPS balls, (a) when 100% of canopy volume converted from canopy fullness level was used as target value and (b) when a 50/50 mix of converted canopy volume and canopy volume measured using EPS balls was used as target value. The color indices denote the values for each strawberry variety. The dashed lines show the regression lines.
Figure 10. A part of the canopy volume distribution map in the entire field.
Figure 11. Canopy volume distribution of the Brilliance variety from 2 to 23 February 2024.
Figure 12. Canopy volume distribution of the Medallion variety from 2 to 23 February 2024.
Figure 13. Box plots of weekly canopy volumes of the sampled plants measured using the EPS balls: (a) Brilliance variety; (b) Medallion variety for 40 sampled plants.
11 pages, 6302 KiB  
Article
Crack Growth Analysis of a Welded Centre Sill in a Hopper Wagon
by Daren Peng, Rhys Jones and Andrew S. M. Ang
Appl. Mech. 2024, 5(4), 762-772; https://doi.org/10.3390/applmech5040042 - 25 Oct 2024
Viewed by 874
Abstract
This paper studies the fatigue crack growth of fillet weld specimens in a fashion that is consistent with that used to assess the fatigue performance of complex aerospace structures under operational flight loads. The fatigue test loads were determined using the overall finite element analysis results of the hopper wagon. The actual applied test loads were monitored using strain gauges. The residual stress in the critical region was determined by combining the stress field of the welded specimen, obtained by a thermal imager under cyclic loading, with the results of the three-dimensional finite element analysis of the specimen. During the fatigue test, a digital camera (with a microscope lens) was used in conjunction with infrared measurement technology to obtain the crack growth information. As in prior studies, the three-dimensional finite element alternating technique was used to calculate the stress intensity factor in the critical area of the crack in the fillet weld specimen. The Hartman–Schijve crack growth equation was then used, in conjunction with the calculated stress intensity factor solutions, to compute the crack growth history in a fatigue test of a critical welded component in a hopper wagon. The resultant computed crack growth histories are relatively consistent with the test results. Full article
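The Hartman–Schijve crack growth equation referenced here has the form da/dN = D[(ΔK − ΔK_thr)/√(1 − K_max/A)]^p. The sketch below integrates it cycle by cycle with a simple ΔK = βΔσ√(πa) driver; all constants and the geometry factor are illustrative assumptions, not the calibrated values used for the welded specimen.

```python
# A minimal sketch of cycle-by-cycle crack growth with the Hartman-Schijve equation,
#   da/dN = D * [(dK - dK_thr) / sqrt(1 - K_max / A)]^p,
# driven by a simple dK = beta * dS * sqrt(pi * a) geometry assumption.
import math

D, p = 2.0e-10, 2.0              # crack growth constants (assumed), m/cycle and exponent
dK_thr, A = 2.0, 60.0            # threshold and cyclic toughness terms, MPa*sqrt(m) (assumed)
beta, dS, R = 1.12, 140.0, 0.1   # geometry factor, stress range (MPa), stress ratio

a, n = 0.0005, 0                 # start from a 0.5 mm crack
while a < 0.005 and n < 5_000_000:            # grow to 5 mm or give up
    dK = beta * dS * math.sqrt(math.pi * a)
    K_max = dK / (1.0 - R)
    if K_max >= A:
        break                                 # approaching fracture
    growth = D * (max(dK - dK_thr, 0.0) / math.sqrt(1.0 - K_max / A)) ** p
    a += growth
    n += 1
print(f"cycles to grow from 0.5 mm to {a * 1000:.2f} mm: {n}")
```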
Show Figures

Figure 1. The resultant mesh and maximum principal stress contour for the hopper wagon. The stress units are in Pa.
Figure 2. The local maximum principal stress contour in the crack-initiated region of the centre sill under draft loading. The stress units are in Pa.
Figure 3. Test specimen and geometry of the FEM model.
Figure 4. Installation location of the resistance strain gauge.
Figure 5. Initial specimen and specimen in the test machine.
Figure 6. A 12.5 mm lens Sony digital camera with 5~10 mm extension.
Figure 7. Stress field (E model) of the specimen vs. number of cycles. Stress units are in Pa.
Figure 8. Cross-section after fracture.
Figure 9. The geometry of the specimen and the mesh for finite element analysis.
Figure 10. Maximum principal stress at both the toe and blunt side of the welded specimen. The stress units are in MPa.
Figure 11. Comparison of measured and computed crack growth histories for the welded specimen.
24 pages, 9759 KiB  
Article
Experimental and Numerical Evaluation of Calcium-Silicate-Based Mineral Foam for Blast Mitigation
by Aldjabar Aminou, Mohamed Ben Rhouma, Bachir Belkassem, Hamza Ousji, Lincy Pyl and David Lecompte
Appl. Sci. 2024, 14(21), 9656; https://doi.org/10.3390/app14219656 - 22 Oct 2024
Viewed by 595
Abstract
Cellular materials such as aluminum and polyurethane foams are recognized for their effectiveness in energy absorption. They commonly serve as crushable cores in sacrificial cladding for blast mitigation purposes. This study delves into the effectiveness of autoclaved aerated concrete (AAC), a lightweight, porous material known for its energy-absorbing properties, as a crushable core in sacrificial cladding. The experimental set-up features a rigid steel frame measuring 1000 × 1000 × 15 mm³ with a central square opening (300 × 300 mm²) holding a 2 mm thick aluminum plate representing the structure. The dynamic response of the aluminum plate is captured using two high-speed cameras arranged in a stereoscopic configuration. Three-dimensional digital image correlation is used to compute the transient deformation fields. Blast loading is achieved by detonating 20 g of C4 explosive set at 250 mm from the plate's center. The study assesses the mineral foam's absorption capacity by comparing the out-of-plane displacement and mean permanent deformation of the aluminum plate with and without the protective solution. Six foam configurations (A to F) are tested experimentally and numerically, varying in the foam's free space for expansion relative to its total volume. Results show positive protective effects, with configuration F reducing maximum deflection by at least 30% and configuration C by up to 70%. Foam configuration influences energy dissipation, with an optimal lateral surface-to-volume ratio (ζ) enhancing the protective effect, although an excessive ζ leads to non-uniform foam crushing. To address the influence of front skin deformability, a non-deformable front skin was also adopted; this demonstrates an increased effectiveness of the sacrificial cladding, particularly for ζ values above the optimal value obtained when using a deformable front skin. Notably, using a non-deformable front skin increases the maximum deflection reduction and foam energy absorption by up to approximately 30%. Full article
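As a rough illustration of the geometric parameter ζ compared across configurations, the following Python sketch reads ζ as the exposed lateral surface area of the foam blocks divided by the total foam volume (an interpretation of the abstract, not the paper's formal definition); all dimensions in the example are hypothetical.

def zeta(blocks_mm, thickness_mm):
    # blocks_mm: list of (length, width) footprints in mm; thickness_mm: foam thickness in mm.
    # "Lateral" surface is taken here as the vertical side faces free to expand (assumption).
    lateral_area = sum(2 * (l + w) * thickness_mm for l, w in blocks_mm)
    volume = sum(l * w * thickness_mm for l, w in blocks_mm)
    return lateral_area / volume  # units: 1/mm

single_block = [(300, 300)]        # one solid 300 x 300 mm block (hypothetical footprint)
spaced_blocks = [(140, 140)] * 4   # four spaced sub-blocks of the same thickness (hypothetical)
print(zeta(single_block, 60))      # ~0.013 1/mm
print(zeta(spaced_blocks, 60))     # ~0.029 1/mm; spacing the foam raises zeta

Splitting the foam into spaced blocks raises ζ because more free faces are exposed for lateral expansion, which is consistent with the trend the abstract describes up to the optimal ζ.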
Show Figures

Figure 1. (a) Overview of the set-up showing the steel frame, two high-speed cameras (Photron Fastcam SA5, Photron, Bucks, UK) arranged stereoscopically, and three LED spots; (b) speckle pattern applied on the aluminum backplate (EN AW-1050 A H24, Thyssenkrupp, Brussels, Belgium), enabling measurement of the displacement and in-plane strain fields; and (c) 20 g of C4 explosive charge in a spherical shape and electric detonator.
Figure 2. Tested samples: (a) backplate without foam and (b) backplate with mineral foam of 60 mm thickness and front skin of 2 mm thickness.
Figure 3. 60 mm thick foam in the single-volume (a) and spaced-foam (b,c) configurations adopted for the experimental tests.
Figure 4. (a) Thick aluminum plate equipped with four high-pressure transducers; and (b) schematic of the geometry of the aluminum plate specimen.
Figure 5. Pressure–time histories for 20 g of C4 set at (a) 250 mm and (b) 188 mm.
Figure 6. Pressure–time histories from Gauge 1 in three different tests to assess the reproducibility of 20 g of C4 set at (a) 250 mm and (b) 188 mm.
Figure 7. Mean (a) peak pressures and (b) impulses, along with their standard deviations, with and without the sacrificial cladding.
Figure 8. Response of the unprotected backplate: (a) deflection profiles for test 2 and (b) mid-span deflections.
Figure 9. Experimental deflection profiles for tests 1: (a) without the protective foam, (b) in Configuration A, (c) in Configuration B, and (d) in Configuration C.
Figure 10. Mitigation of the blast load through pulverization of the mineral foam.
Figure 11. Post-mortem visual inspection of the fractured foam layer for Configuration A.
Figure 12. Mid-span deflection of the backplate with and without the protective foam. Tests 1 are shown with a solid line, Tests 2 with a dotted line, and Tests 3 with a dash-dot line.
Figure 13. (a) Front view of the experimental set-up in the finite element model and (b) one-quarter of the SC solution with the foam in SPH, viewed from the rear of the set-up (front skin in green-foam-backplate in red).
Figure 14. Convergence study of the mesh size on the aluminum plates and the mineral foam.
Figure 15. Quasi-static and dynamic compression stress-strain curves that were inserted as a table for the Fu Chang foam material model.
Figure 16. Additional configurations adopted for numerical simulation: (a) Configuration D, (b) Configuration E, and (c) Configuration F.
Figure 17. Comparison of numerical and experimental reflected over-pressure measured by Gauge 1: (a) results without the protective foam and (b) results with 60 mm thick foam cladding.
Figure 18. Comparison between the experimental and numerical deflection at the backplate's center with the protective foam for Configuration A.
Figure 19. Experimental and numerical deflection profiles: (a) without the protective foam, (b) Configuration A, (c) Configuration B, and (d) Configuration C.
Figure 20. Bar chart illustrating the maximum deflection of the backplate's center across various configurations and three charge masses.
Figure 21. Bar chart illustrating the mean permanent deformation of the backplate's center across various configurations and three charge masses.
Figure 22. Maximum deflection at the backplate's center as a function of ζ for different explosive charge masses.
Figure 23. Energy absorbed by the foam as a function of ζ for different explosive charge masses.
Figure 24. Isometric view of the out-of-plane displacements (in mm) of the backplate (top plate), foam, and front skin for the different configurations.
Figure 25. Deflection profiles at the mid-plane of the backplate and the front skin.
Figure 26. Foam level of crushing across various foam configurations for a charge mass of 50 g TNT equivalent.
Figure 27. Evolution up to 0.6 ms, in an isometric view, of the out-of-plane displacements (in mm) of the backplate (top plate), foam, and front skin for Configurations C, D, and E.
Figure 28. Influence of front skin deformability on the maximum deflection at the backplate's center across various foam configurations.
Figure 29. Influence of front skin deformability on the energy absorbed by the foam for different foam configurations.
10 pages, 1712 KiB  
Article
A Novel Polarized Light Microscope for the Examination of Birefringent Crystals in Synovial Fluid
by John D. FitzGerald, Chesca Barrios, Tairan Liu, Ann Rosenthal, Geraldine M. McCarthy, Lillian Chen, Bijie Bai, Guangdong Ma and Aydogan Ozcan
Gout Urate Cryst. Depos. Dis. 2024, 2(4), 315-324; https://doi.org/10.3390/gucdd2040022 - 22 Oct 2024
Viewed by 798
Abstract
Background: The gold standard for crystal arthritis diagnosis relies on the identification of either monosodium urate (MSU) or calcium pyrophosphate (CPP) crystals in synovial fluid. With the goal of enhanced crystal detection, we adapted a standard compensated polarized light microscope (CPLM) with a polarized digital camera and multi-focal depth imaging capabilities to create digital images from synovial fluid mounted on microscope slides. Using this single-shot computational polarized light microscopy (SCPLM) method, we compared rates of crystal detection and raters' preference for image. Methods: Microscope slides from patients with either CPP, MSU, or no crystals in synovial fluid were acquired using CPLM and SCPLM methodologies. Detection rate, sensitivity, and specificity were evaluated by presenting expert crystal raters with (randomly sorted) CPLM and SCPLM digital images from FOVs of the above clinical samples. For each FOV and each method, each rater was asked to identify crystal suspects and their level of certainty for each crystal suspect and crystal type (MSU vs. CPP). Results: For the 283 crystal suspects evaluated, SCPLM resulted in higher crystal detection rates than did CPLM, for both CPP (51% vs. 28%) and MSU (78% vs. 46%) crystals. Similarly, sensitivity was greater for SCPLM for CPP (0.63 vs. 0.35) and MSU (0.88 vs. 0.52) without giving up much specificity, resulting in a higher AUC. Conclusions: Subjective and objective measures of greater detection and higher certainty were observed for SCPLM over CPLM, particularly for CPP crystals. The digital data associated with these images can ultimately be incorporated into an automated crystal detection system that provides a quantitative report on crystal count, size, and morphology. Full article
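The reported sensitivity and specificity values follow from standard confusion-matrix definitions. The Python sketch below uses made-up counts purely to make the metrics concrete; it is not the study's data or code.

def sensitivity(true_pos: int, false_neg: int) -> float:
    # Fraction of reference-standard crystals that the rater correctly identified.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    # Fraction of non-crystal objects that the rater correctly rejected.
    return true_neg / (true_neg + false_pos)

# Hypothetical CPP counts for one imaging method (illustration only):
print(sensitivity(63, 37))   # 0.63
print(specificity(90, 10))   # 0.90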
Show Figures

Figure 1. Single-shot computational polarized light microscopy (SCPLM) setup and schematic diagram. (A) Single-shot computational polarized light microscopy (SCPLM) setup. (B) Schematic diagram of the SCPLM setup.
Figure 2. Top row: side-by-side CPLM (magenta) and SCPLM (grey) comparison images (CPPD patient). Bottom row: side-by-side CPLM (magenta) and SCPLM (grey) comparison images (MSU patient).
Figure 3. Crystal identification workflow and final crystal-specific analytic sample selection by clinical source. * 37/196 crystals with low certainty scores (1 or 2). † 10/87 crystals with low certainty scores. ‡ 10/10 crystals with low certainty scores, excluded from crystal-specific analyses. Legend: CPPD = calcium pyrophosphate deposition; MSU = monosodium urate; FOV = field of view; C/S = crystal suspect; UNK = unknown; Neg Ctrl = negative control.