

Search Results (463)

Search Parameters:
Keywords = feature reuse

31 pages, 35674 KiB  
Article
Discussion Points of the Remote Sensing Study and Integrated Analysis of the Archaeological Landscape of Rujm el-Hiri
by Olga Khabarova, Michal Birkenfeld and Lev V. Eppelbaum
Remote Sens. 2024, 16(22), 4239; https://doi.org/10.3390/rs16224239 - 14 Nov 2024
Viewed by 282
Abstract
Remote sensing techniques provide crucial insights into ancient settlement patterns in various regions by uncovering previously unknown archaeological sites and clarifying the topological features of known ones. Meanwhile, in the northern part of the Southern Levant, megalithic structures remain largely underexplored with these methods. This study addresses this gap by analyzing, for the first time, the landscape around Rujm el-Hiri, one of the most prominent Southern Levantine megaliths, dated to the Chalcolithic/Early Bronze Age. We discuss the type and extent of the archaeological remains identified in satellite images within a broader context, focusing on the relationships between landscapes and these objects and the implications of their possible function. Our analysis of multi-year satellite imagery covering the 30 km region surrounding the Sea of Galilee reveals several distinct patterns: 40–90-m-wide circles and thick walls primarily constructed along streams, possibly as old as Rujm el-Hiri itself; later-period thin linear walls forming vast rectangular fields and flower-like clusters of ~20-m-diameter round-shaped fences found in wet areas; and tumuli topologically linked to the linear walls and flower-like fences. Although the tumuli share similar forms and likely construction techniques, their spatial distribution, their connections to other archaeological features, and the statistical distribution of their sizes suggest that they might have served diverse functions. The objects and patterns identified may be used to further train neural networks to analyze their spatial properties and interrelationships. Most archaeological structures in the region were reused long after their original construction: new features were added, walls were built over older ones, and the landscape was reshaped with new objects. Rujm el-Hiri is a prime example of such a complex sequence. Geomagnetic analysis shows that, because the entire region has rotated over time, Rujm el-Hiri's location has shifted tens of meters from its original position over the thousands of years of the structure's existence, challenging theories that its walls were aligned with astronomical bodies and raising questions about its possible identification as an observatory. Full article
(This article belongs to the Section Remote Sensing and Geo-Spatial Science)
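The claim that differences in the statistical distribution of tumuli sizes point to diverse functions (see Figure 12 below) can be illustrated with a standard two-sample test. The following sketch is illustrative only, with placeholder diameters rather than the paper's measurements; it assumes SciPy's Kolmogorov-Smirnov test.

```python
# Hypothetical sketch: compare two tumuli size samples (diameters in meters).
# The values below are placeholders, not measurements from the paper.
import numpy as np
from scipy.stats import ks_2samp

all_tumuli = np.array([4.2, 5.1, 6.8, 7.4, 8.9, 10.3, 12.0, 15.6])   # pooled sample
reservoir_tumuli = np.array([3.9, 4.4, 5.0, 5.7, 6.1, 6.9])          # one sub-area

stat, p_value = ks_2samp(all_tumuli, reservoir_tumuli)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small p-value would indicate the two size distributions differ,
# consistent with the idea that tumuli groups served different functions.
```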
Show Figures
Figure 1: Rujm el-Hiri. (a) Geographic location, (32°54′30.87″N, 35°48′3.89″E); (b) Aerial view, adapted from [21]; (c) Distance-height profile of the surrounding area relative to the northernmost point of the Sea of Galilee (vertical axis—altitude below/above sea level, in m; horizontal axis—the distance in km). The vertical line indicates the location of Rujm el-Hiri.
Figure 2: Results of the combined geophysical analysis in the area under study. (A): combined paleomagnetic–magnetic–radiometric scheme of the Sea of Galilee (modified and supplemented after [72]). (1) outcropped Cenozoic basalts, (2) points with the radiometric age of basalts (in m.y.), (3) wells, (4) faults, (5) general direction of the discovered buried basaltic plate dipping in the southern part of the Sea of Galilee, (6) counter clockwise (a) and clockwise (b) rotation of faults and tectonic blocks, (7) pull-apart basin of the Sea of Galilee, (8) suggested boundaries of the paleomagnetic zones in the sea, data of land paleomagnetic measurements: (9) reverse magnetization, (10) normal magnetization, results of magnetic anomalies analysis: (11) normal magnetization, (12) reverse magnetization, (13) reversely magnetized basalts, (14) normal magnetized basalts, (15) Miocene basalts and sediments with the complex paleomagnetic characteristics, (16) Pliocene–Pleistocene basalts and sediments with complex paleomagnetic characteristics, (17) index of paleomagnetic zonation. (B): The generalized results of the paleomagnetic–geodynamic studies in northern Israel (after [71,72]) overlaid on the geological map of Israel (map after [97]; geological captions are omitted for simplicity).
Figure 3: Rujm el-Hiri site, as seen from space in different years and seasons. High-resolution images from Pleiades satellites processed by CNES/Airbus are provided by Google Earth Pro. Eye altitude is 460 m, tilt—zero.
Figure 4: Landscape around the Rujm el-Hiri site, large-scale view. Upper panel: general view of the Rujm el-Hiri area with distinct types of archaeological objects indicated by arrows. Bottom panels: examples of the key types of archaeological objects identified in satellite images. Here and below, the north direction is as shown in Figure 3.
Figure 5: Linear-shaped walls and rectangular fields, and livestock enclosures beneath the Revaya reservoir. (a) General view of the reservoir during the full water period in 2018. (b) Bottom of the lake during the low water period in 2021. (c) Close-up of (b) indicated by a green rectangle. (d–f) Close-up of (b) indicated by the turquoise rectangle and two objects related to the human exploitation of the area surrounding the former small lake before the reservoir's dike was constructed. Here and below, the location is given with coordinates in white corresponding to the center of the site under study.
Figure 6: Walls, rectangular livestock enclosures, and old wide walls built along the former stream near Rujm el-Hiri.
Figure 7: Examples of round-shaped walls or fences forming flower-like clusters of ~100 m diameter. (a) Well-preserved site on the bottom of the Dvash reservoir; (b) Flower-like cluster of fences found along the Wadi Hafina stream; (c) Flower-like structures near the Revaya reservoir; (d) Analogous structures located 4 km to the south of Rujm el-Hiri; (e) Flower-like structures on the hill by the Nachal Akbara stream 28 km to the north-west of Rujm el-Hiri; (f) Merging clusters connected by walls 12 km to the north of Rujm el-Hiri. Archaeological objects of this type are found in the nearest vicinity of water sources.
Figure 8: Examples of more complex round-shaped fences forming flower-like clusters. (a) Flower-like conglomerate of fences located 6.5 km southwest of Rujm el-Hiri; (b) analogous cluster located 14.3 km north of Rujm el-Hiri featuring rectangular structures around the center; (c) two clusters with tumuli in the center linked by the wall, located one kilometer north of Rujm el-Hiri.
Figure 9: Examples of round-shaped large structures of different types. (a,b) Objects with double walls, probably built in the same period as Rujm el-Hiri. (c,d) Singular-wall objects of the later period filled with linear structures. There are remains of the buildings or tumuli in the circular structure shown in (d).
Figure 10: Examples of round-shaped ~60–90 m-wide structures, with the entrance facing southeast and signatures of active secondary use. (a) Round-shaped structure situated 3 km northeast of Rujm el-Hiri; (b) round-shaped structure located 13.5 km north of Rujm el-Hiri; (c) analogous object located 13.5 km northwest of Rujm el-Hiri.
Figure 11: Tumuli observed in different landscapes. (a) Agglomerate of tumuli along the Dalyiot stream 500 m north of Rujm el-Hiri. The distance between the tumuli is small, ~3–10 m. Most tumuli are linked by walls, and some of them are surrounded by fences; (b) Several tumuli among rectangular walls located 0.7 km southwest of Rujm el-Hiri. The distance between the tumuli is tens of meters; (c) Agglomerate of poorly-preserved tumuli on the hill 28 km east of Rujm el-Hiri. The tumuli are located close to each other, similar to (a), inside rectangular walls.
Figure 12: Distribution of tumuli sizes observed in different landscapes. The black color shows all tumuli in three selected areas (the tumuli field shown in Figure 11a, the tumuli field located to the northwest of the Revaya reservoir, the Revaya reservoir tumuli, and the tumuli field to the southwest of Rujm el-Hiri), 304 tumuli in total. The white color indicates tumuli on the bottom of the Revaya reservoir, shown in Figure 5.
Figure 13: Combined types of archaeological objects belonging to different epochs. (a) The site, located 3 km northwest of Rujm el-Hiri, features interlinked objects such as tumuli, round-shaped structures, and walls. Modern activities damage the site. (b) Unfinished or damaged Rujm el-Hiri-type object with thick walls, located 1.7 km south of Rujm el-Hiri. The internal space is filled with flower-like circular walls of the later period. (c) Rujm el-Hiri-size circular object situated 13 km north of Rujm el-Hiri. The site was intensively reused.
Figure 14: Walls of different periods in the archaeological landscape. (a) An example of the later period walls built upon older-period walls; (b) Walls of different periods as seen in Rujm el-Hiri, aerial view.
20 pages, 7344 KiB  
Article
Research on a Joint Extraction Method of Track Circuit Entities and Relations Integrating Global Pointer and Tensor Learning
by Yanrui Chen, Guangwu Chen and Peng Li
Sensors 2024, 24(22), 7128; https://doi.org/10.3390/s24227128 - 6 Nov 2024
Viewed by 337
Abstract
To address the issue of efficiently reusing the massive amount of unstructured knowledge generated during the handling of track circuit equipment faults and to automate the construction of knowledge graphs in the railway maintenance domain, it is crucial to leverage knowledge extraction techniques to efficiently extract relational triplets from fault maintenance text data. Given the current lag in joint extraction technology within the railway domain and the inefficiency in resource utilization, this paper proposes a joint extraction model for track circuit entities and relations, integrating Global Pointer and tensor learning. Taking into account the associative characteristics of semantic relations, the nesting of domain-specific terms in the railway sector, and semantic diversity, this research views the relation extraction task as a tensor learning process and the entity recognition task as a span-based Global Pointer search process. First, a multi-layer dilate gated convolutional neural network with residual connections is used to extract key features and fuse the weighted information from the 12 different semantic layers of the RoBERTa-wwm-ext model, fully exploiting the performance of each encoding layer. Next, the Tucker decomposition method is utilized to capture the semantic correlations between relations, and an Efficient Global Pointer is employed to globally predict the start and end positions of subject and object entities, incorporating relative position information through rotary position embedding (RoPE). Finally, comparative experiments with existing mainstream joint extraction models were conducted, and the proposed model’s excellent performance was validated on the English public datasets NYT and WebNLG, the Chinese public dataset DuIE, and a private track circuit dataset. The F1 scores on the NYT, WebNLG, and DuIE public datasets reached 92.1%, 92.7%, and 78.2%, respectively. Full article
(This article belongs to the Section Sensor Networks)
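One step the abstract describes, fusing weighted information from the 12 semantic layers of RoBERTa-wwm-ext, can be sketched as a small learned-weight pooling module. The code below is a hypothetical illustration in PyTorch, not the authors' implementation; the layer count and hidden size are assumptions.

```python
# Hypothetical sketch: fusing the 12 encoder layers of a RoBERTa-style model
# with learned softmax weights (not the authors' code).
import torch
import torch.nn as nn

class LayerFusion(nn.Module):
    def __init__(self, num_layers: int = 12):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))  # one scalar per layer

    def forward(self, hidden_states):  # list of 12 tensors [batch, seq, hidden]
        stacked = torch.stack(hidden_states, dim=0)          # [12, batch, seq, hidden]
        weights = torch.softmax(self.layer_weights, dim=0)   # normalized layer weights
        return (weights[:, None, None, None] * stacked).sum(dim=0)

fusion = LayerFusion()
dummy_layers = [torch.randn(2, 16, 768) for _ in range(12)]
fused = fusion(dummy_layers)  # [2, 16, 768], fed to the downstream pointer/tensor heads
```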
Show Figures
Figure 1: Example of overlapping relations.
Figure 2: The structure of the joint extraction model for track circuit entities and relations integrates Global Pointer and tensor learning.
Figure 3: The structure of a multi-layer dilate gated convolutional neural network.
Figure 4: Example of how to construct a three-dimension word relation tensor from word tables.
Figure 5: Knowledge association structure diagram.
Figure 6: The results of different methods on the track circuit validation set.
Figure 7: The experimental results using different upstream models on the track circuit test set.
Figure 8: The triplet extraction performance under different dimensions of the core tensor G.
Figure 9: The parameter-tuning experiment for α.
Figure 10: The parameter-tuning experiment for γ.
Figure 11: The model extracts entity types from case sentences.
Figure 12: The model extracts relation types from case sentences.
24 pages, 6467 KiB  
Article
YOLO-DHGC: Small Object Detection Using Two-Stream Structure with Dense Connections
by Lihua Chen, Lumei Su, Weihao Chen, Yuhan Chen, Haojie Chen and Tianyou Li
Sensors 2024, 24(21), 6902; https://doi.org/10.3390/s24216902 - 28 Oct 2024
Viewed by 512
Abstract
Small object detection, which is frequently applied in defect detection, medical imaging, and security surveillance, often suffers from low accuracy due to limited feature information and blurred details. This paper proposes a small object detection method named YOLO-DHGC, which employs a two-stream structure with dense connections. Firstly, a novel backbone network, DenseHRNet, is introduced. It innovatively combines a dense connection mechanism with high-resolution feature map branches, effectively enhancing feature reuse and cross-layer fusion, thereby obtaining high-level semantic information from the image. Secondly, a two-stream structure based on an edge-gated branch is designed. It uses higher-level information from the regular detection stream to eliminate irrelevant interference remaining in the early processing stages of the edge-gated stream, allowing it to focus on processing information related to shape boundaries and accurately capture the morphological features of small objects. To assess the effectiveness of the proposed YOLO-DHGC method, we conducted experiments on several public datasets and a self-constructed dataset. Notably, a defect detection accuracy of 96.3% was achieved on the Market-PCB public dataset, demonstrating the effectiveness of our method in detecting small object defects for industrial applications. Full article
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection: 2nd Edition)
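The dense connection mechanism that DenseHRNet builds on concatenates every earlier feature map into each new layer's input, which is what enables the feature reuse mentioned above. A minimal DenseNet-style sketch follows (assumed channel sizes, not the paper's DenseHRNet).

```python
# Minimal sketch of the dense-connection idea (DenseNet-style feature reuse),
# not the actual DenseHRNet implementation from the paper.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate  # each new layer sees all previous outputs

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse every earlier feature map
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=64)
y = block(torch.randn(1, 64, 80, 80))  # -> [1, 64 + 4*32, 80, 80]
```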
Show Figures
Graphical abstract
Figure 1: Framework of the YOLO-DHGC algorithm.
Figure 2: Single-resolution DenseHRNet connection. Different colored arrows represent the flow of information from different layers.
Figure 3: The architecture of DenseHRNet backbone network. The black arrows represent information directly passed from the previous layer, while the other colored arrows indicate information passed across different layers. These colored arrows illustrate the dense connection mechanism, where each layer is directly connected to all subsequent layers, enhancing feature reuse and information flow.
Figure 4: Fusion process for different resolution feature maps.
Figure 5: Structure of the edge-gated stream.
Figure 6: Comparison of mAP@0.5 and loss change curves during training. (a) represents a comparison in mAP@0.5; (b) represents a comparison in bounding box regression loss; (c) represents a comparison of confidence loss; (d) represents a comparison in classification loss.
Figure 7: YOLO-DHGC visualization results for Short and Spurious Copper defects.
Figure 8: Visualization of detection results of YOLO-DHGC on PKU-Market-PCB dataset.
Figure 9: Comparison of visual results of YOLO-DHGC detection on NEU-DET dataset.
Figure 10: Comparison of algorithm detection results on TinyPerson dataset.
Figure 11: Localized enlargement of detection results of various defects of backlight boards.
7 pages, 996 KiB  
Communication
Pd EnCat™ 30 Recycling in Suzuki Cross-Coupling Reactions
by Laura D’Andrea and Casper Steinmann
Organics 2024, 5(4), 443-449; https://doi.org/10.3390/org5040023 - 22 Oct 2024
Viewed by 686
Abstract
Pd EnCat™ 30 is a palladium catalyst broadly used in several hydrogenation and cross-coupling reactions. It is known for its numerous beneficial features, which include high-yielding performance, easy recovery, and reusability. However, the available data regarding its recyclability in Suzuki coupling reactions are limited to a few reaction cycles and, therefore, fail to explore its full potential. Our work focuses on investigating the extent of Pd EnCat™ 30 reusability in Suzuki cross-coupling reactions by measuring its performance according to isolated yields of product. Our findings demonstrate that Pd EnCat™ 30 can be reused over a minimum of 30 reaction cycles, which is advantageous in terms of cost reduction and more sustainable chemical production. Full article
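The reported gradual loss of activity over repeated cycles is the kind of trend one could summarize with a simple linear fit of isolated yield against cycle number. The sketch below uses invented yield values purely as an illustration of that calculation, not the article's data.

```python
# Illustrative only: estimating an average per-cycle change in isolated yield
# with a linear fit. The yield values here are placeholders, not the paper's data.
import numpy as np

cycles = np.arange(1, 31)
yields = 95.0 - 0.03 * cycles + np.random.default_rng(0).normal(0, 1.0, 30)  # placeholder data

slope, intercept = np.polyfit(cycles, yields, deg=1)
print(f"fitted trend: {slope:.3f} % yield per cycle (intercept {intercept:.1f} %)")
# A shallow negative slope over 30 cycles would mirror the small overall loss
# of catalytic activity reported in the article.
```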
Show Figures
Figure 1: Isolated yields of 2-methoxy-4′-nitrobiphenyl measured across 10 experiments conducted in triplicate.
Figure 2: Isolated yields of 2-methoxy-4′-nitrobiphenyl measured across 30 experiments. The overall average decrease in catalytic activity (light green line) amounts to circa 2.64%.
Figure 3: X-ray diffraction patterns of Pd EnCat™ 30 prior to use (green) and after the last reaction cycle (purple). They show its amorphous nature and signals from its main components: the polyurea matrix and two differently sized palladium nanograins.
Scheme 1: Reaction scheme of the Suzuki cross-coupling reaction used for investigating Pd EnCat™ 30 recyclability. TBAA: tetrabutylammonium acetate.
29 pages, 4128 KiB  
Article
A Context-Based Perspective on Frost Analysis in Reuse-Oriented Big Data-System Developments
by Agustina Buccella, Alejandra Cechich, Federico Saurin, Ayelén Montenegro, Andrea Rodríguez and Angel Muñoz
Information 2024, 15(11), 661; https://doi.org/10.3390/info15110661 - 22 Oct 2024
Viewed by 578
Abstract
The large amount of available data, generated every second via sensors, social networks, organizations, and so on, has generated new lines of research that involve novel methods, techniques, resources, and/or technologies. The development of big data systems (BDSs) can be approached from different perspectives, all of them useful, depending on the objectives pursued. In particular, in this work, we address BDSs in the area of software engineering, contributing to the generation of novel methodologies and techniques for software reuse. In this article, we propose a methodology to develop reusable BDSs by mirroring activities from software product line engineering. This means that the process of building BDSs is approached by analyzing the variety of domain features and modeling them as a family of related assets. The contextual perspective of the proposal, along with its supporting tool, is introduced through a case study in the agrometeorology domain. The characterization of variables for frost analysis exemplifies the importance of identifying variety, as well as the possibility of reusing previous analyses adjusted to the profile of each case. In addition to showing interesting findings from the case, we also exemplify our concept of context variety, which is a core element in modeling reusable BDSs. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
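The frost case study relies on Pearson and Spearman correlations between meteorological variables and frost occurrence (see Figures 9–11 below). A minimal illustration of that comparison, with placeholder column names and values, follows.

```python
# Hypothetical sketch of the kind of correlation check described for the frost case
# (Pearson vs. Spearman); column names and values are placeholders, not project data.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.DataFrame({
    "WindSpeed": [1.2, 0.8, 2.5, 3.1, 0.4, 1.9, 2.2, 0.6],
    "FrostCont": [5, 7, 2, 1, 9, 3, 2, 8],   # frost occurrences per period
})

r_p, p_p = pearsonr(df["WindSpeed"], df["FrostCont"])
r_s, p_s = spearmanr(df["WindSpeed"], df["FrostCont"])
print(f"Pearson r = {r_p:.2f} (p = {p_p:.3f}); Spearman rho = {r_s:.2f} (p = {p_s:.3f})")
```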
Show Figures
Figure 1: Reuse-oriented big data methodology.
Figure 2: Datasheet for recording content variety: an example.
Figure 3: Top-down approach to the development of domain and reusable cases in the agrometeorology domain.
Figure 4: P-1 process of CoVaMaT documenting the agrometeorology-domain asset, together with the context variety for the objective.
Figure 5: P-1 process of CoVaMaT documenting the source variety associated with the agrometeorology-domain asset.
Figure 6: P-1 process of CoVaMaT documenting the content and process variety assets.
Figure 7: Correlation analysis for the Original Dataset Villa Regina.
Figure 8: Scatter graphs analyzing rain rate and wind speed with respect to frosts.
Figure 9: Correlation analysis of Dynamic Dataset Villa Regina(n) from n = 0 to n = 138 according to Pearson (blue) and Spearman (orange).
Figure 10: Relationship between WindSpeed and FrostCont variables, considering the Dynamic Dataset Villa Regina(n) from n = 0 to n = 138.
Figure 11: Relationship between RainRate and FrostCont variables, considering the Dynamic Dataset Villa Regina(n) from n = 0 to n = 138.
Figure 12: Adding more process-variety assets with variants for the processing type, analysis, and visualization techniques applied.
Figure 13: Domain case VR created for the instantiation of the agrometeorology-domain asset, together with associated files (during the application engineering phase).
Figure 14: P-4 process retrieving stored assets: (1) query assets, (2) search for domain-variety assets, and (3) search for domain-case assets.
Figure 15: P-3 process for documenting Reusable Case Guerrico from the agrometeorology-domain assets.
Figure 16: Relationship between WindSpeed and FrostCont variables considering Dynamic Dataset Villa Regina(n) from n = 0 to n = 138 in Guerrico.
Figure 17: Percentage change analysis between Villa Regina and Guerrico for Hypothesis 1 of the 13 variables with respect to low temperature.
Figure 18: Percentage change analysis between Villa Regina and Guerrico analyzing a 24 h period.
Figure 19: Percentage change analysis for (a) wind speed, (b) wind run, and (c) rain rate variables in a 24 h period for Villa Regina and Guerrico.
Figure 20: Simplified view of Gantt charts for the four cases evaluated.
Figure 21: Differences in the percentages of time required for TCaseVR compared to the other three cases.
23 pages, 7971 KiB  
Article
Three-Dimensional Outdoor Object Detection in Quadrupedal Robots for Surveillance Navigations
by Muhammad Hassan Tanveer, Zainab Fatima, Hira Mariam, Tanazzah Rehman and Razvan Cristian Voicu
Actuators 2024, 13(10), 422; https://doi.org/10.3390/act13100422 - 16 Oct 2024
Viewed by 781
Abstract
Quadrupedal robots are confronted with the intricate challenge of navigating dynamic environments fraught with diverse and unpredictable scenarios. Effectively identifying and responding to obstacles is paramount for ensuring safe and reliable navigation. This paper introduces a pioneering method for 3D object detection, termed viewpoint feature histograms, which leverages the established paradigm of 2D detection in projection. By translating 2D bounding boxes into 3D object proposals, this approach not only enables the reuse of existing 2D detectors but also significantly increases the performance with less computation required, allowing for real-time detection. Our method is versatile, targeting both bird’s eye view objects (e.g., cars) and frontal view objects (e.g., pedestrians), accommodating various types of 2D object detectors. We showcase the efficacy of our approach through the integration of YOLO3D, utilizing LiDAR point clouds on the KITTI dataset, to achieve real-time efficiency aligned with the demands of autonomous vehicle navigation. Our model selection process, tailored to the specific needs of quadrupedal robots, emphasizes considerations such as model complexity, inference speed, and customization flexibility, achieving an accuracy of up to 99.93%. This research represents a significant advancement in enabling quadrupedal robots to navigate complex and dynamic environments with heightened precision and safety. Full article
(This article belongs to the Section Actuators for Robotics)
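The core idea of translating a 2D bounding box into a 3D object proposal can be sketched as a frustum selection: keep the LiDAR points that project inside the 2D box and bound them. The function below is a hedged illustration assuming a KITTI-style 3x4 projection matrix, not the paper's YOLO3D pipeline.

```python
# Hedged sketch of one way to lift a 2D detection to a 3D proposal from LiDAR points:
# keep the points whose camera projection falls inside the 2D box, then bound them.
# The projection matrix P and the box are placeholders (KITTI-style 3x4 convention assumed).
import numpy as np

def frustum_proposal(points_xyz: np.ndarray, P: np.ndarray, box2d):
    """points_xyz: (N, 3) points already in the camera frame; box2d: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box2d
    homog = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    proj = homog @ P.T                                                  # (N, 3)
    u, v = proj[:, 0] / proj[:, 2], proj[:, 1] / proj[:, 2]
    in_box = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2) & (proj[:, 2] > 0)
    pts = points_xyz[in_box]
    if len(pts) == 0:
        return None
    # Axis-aligned 3D proposal (min/max corners); a real pipeline would fit an oriented box.
    return pts.min(axis=0), pts.max(axis=0)
```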
Show Figures
Figure 1: Workflow diagram for a robot highlighting the main processes required.
Figure 2: Sub-process of object detection. Workflow from data capture to control.
Figure 3: Related papers for object detection on the KITTI dataset [8,16,20,37,39,40,42–76].
Figure 4: Architecture of the model.
Figure 5: An object detection model represented architecturally in one step. In one run over the network, the model trains on the class probabilities and BBox regression, as opposed to the two passes needed by the two-stage model [22].
Figure 6: The image is divided into an S × S grid by the YOLO model. Each grid cell's confidence score, class probabilities, and BBoxes are all predicted by the model [22].
Figure 7: Deep design architecture of the model.
Figure 8: Bounding boxes for viewpoint: red shows the rear and blue shows the front of the object.
Figure 9: Quadrupedal robot maneuvering in indoor and outdoor environments equipped with a Realsense camera.
Figure 10: Image of KITTI dataset.
Figure 11: Velodyne data in KITTI dataset.
Figure 12: Enhanced Velodyne: for visualization purposes only.
Figure 13: Confusion matrix of 2D detection.
Figure 14: F1 score curve for 2D detection.
Figure 15: Calculated Average Precision (AP).
Figure 16: Detections achieved by the YOLO3D model.
Figure 17: Accuracy achieved by the YOLO3D model.
Figure 18: Average recall of the YOLO3D model.
Figure 19: Loss calculated for the YOLO3D model.
Figure 20: Detection achieved for a high-contrast image.
Figure 21: Detection achieved for a blurred image.
Figure 22: Detection achieved for a jittery image.
47 pages, 10840 KiB  
Article
Smart Product-Service System for Parking Furniture—Sale of Storage Space in Parking Places
by Mariusz Salwin and Tomasz Chmielewski
Sustainability 2024, 16(20), 8824; https://doi.org/10.3390/su16208824 - 11 Oct 2024
Viewed by 875
Abstract
Growing competition, changing customer needs, and growing environmental protection requirements mean companies are forced to change their approach to business. Traditional product sales are being replaced by systemic solutions focused on meeting specific customer requirements while reducing negative impacts on the environment. One such solution is the Product-Service System (PSS). This allows manufacturers to offer their products’ functionalities and features through related services. By extending the life of products, promoting the reuse of materials, and reducing the amount of waste, the implementation of PSS strongly supports sustainable development. The paper focuses on a new product group—garage boxes (GB). It discusses a new PSS business model that responds to the needs of people living in blocks of flats with no tenant storage lockers or rooms in the basement. The new business model sells the function (storing various possessions) and eliminates problems faced by tenants due to the lack of sufficient storage space. It provides customers with high-quality GB for as long as they need them. Customers can pick and choose equipment with additional services depending on their needs. The idea of the model is the outcome of a nationwide study carried out in Poland on a group of 500 residents of blocks of flats and consultations with manufacturers, homeowner associations, wholesale and retail traders, and the financial sector. The study provided us with information and data that provided a comprehensive picture of the problem of the absence of storage lockers or rooms for residents and the needs connected with GB. The results of the conducted research indicate that the developed business model responds to the diverse requirements of residents and supports sustainable solutions. It is an alternative to the lack of a storage unit assigned to each apartment. The business model developed in the paper is highly innovative and comprehensive. This makes it an attractive solution for residents of apartment blocks, and its implementation can significantly reduce the environmental impact. Full article
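The financial comparison between traditional sales and the PSS option (Figures 10–15 below) rests on discounted-cash-flow measures such as NPV. A toy calculation with invented cash flows and an assumed discount rate illustrates the mechanics.

```python
# Illustrative sketch of the NPV comparison idea behind Figures 12 and 13;
# cash flows and the discount rate below are invented placeholders, not the study's data.
def npv(rate: float, cash_flows) -> float:
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Traditional sale: one upfront payment. PSS: smaller recurring fees over 20 years.
traditional = [3000.0] + [0.0] * 20
pss = [0.0] + [250.0] * 20

rate = 0.05  # assumed discount rate
print(f"NPV traditional: {npv(rate, traditional):.0f}")
print(f"NPV PSS:         {npv(rate, pss):.0f}")
```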
Show Figures
Figure 1: PSS design methods—classification by sectors.
Figure 2: Photos of a garage box installed in a manufactured parking space.
Figure 3: Parts of a garage box that need to be replaced during modernisation.
Figure 4: Additional services linked with a garage box [AS] that are of interest to customers.
Figure 5: Additional components of garage box equipment [AE] which are interesting to clients.
Figure 6: PSS model for garage boxes.
Figure 7: Canvas Business Model for a new solution.
Figure 8: PSS for garage boxes—flows.
Figure 9: System operation flow.
Figure 10: Manufacturer perspective: annual financial flows for traditional sales and a Product-Service System—option for a 20-year garage box life cycle.
Figure 11: Customer perspective: annual fees.
Figure 12: Manufacturer perspective: Net Present Value for traditional sales and a Product-Service System—option for a 20-year garage box life cycle.
Figure 13: User perspective: Net Present Value for traditional sales and a Product-Service System—option for a 20-year garage box life cycle.
Figure 14: Manufacturer perspective—Internal Rate of Return (IRR) that compares traditional sales and a Product-Service System—an option for 20-year lifetime of a garage box.
Figure 15: Manufacturer perspective: Modified Internal Rate of Return (MIRR) for traditional sales and a Product-Service System—option for a 20-year garage box life cycle.
21 pages, 3267 KiB  
Article
Attention-Guided Sample-Based Feature Enhancement Network for Crowded Pedestrian Detection Using Vision Sensors
by Shuyuan Tang, Yiqing Zhou, Jintao Li, Chang Liu and Jinglin Shi
Sensors 2024, 24(19), 6350; https://doi.org/10.3390/s24196350 - 30 Sep 2024
Viewed by 520
Abstract
Occlusion presents a major obstacle in the development of pedestrian detection technologies utilizing computer vision. This challenge includes both inter-class occlusion caused by environmental objects obscuring pedestrians, and intra-class occlusion resulting from interactions between pedestrians. In complex and variable urban settings, these compounded occlusion patterns critically limit the efficacy of both one-stage and two-stage pedestrian detectors, leading to suboptimal detection performance. To address this, we introduce a novel architecture termed the Attention-Guided Feature Enhancement Network (AGFEN), designed within the deep convolutional neural network framework. AGFEN improves the semantic information of high-level features by mapping it onto low-level feature details through sampling, creating an effect comparable to mask modulation. This technique enhances both channel-level and spatial-level features concurrently without incurring additional annotation costs. Furthermore, we transition from a traditional one-to-one correspondence between proposals and predictions to a one-to-multiple paradigm, facilitating non-maximum suppression using the prediction set as the fundamental unit. Additionally, we integrate these methodologies by aggregating local features between regions of interest (RoI) through the reuse of classification weights, effectively mitigating false positives. Our experimental evaluations on three widely used datasets demonstrate that AGFEN achieves a 2.38% improvement over the baseline detector on the CrowdHuman dataset, underscoring its effectiveness and potential for advancing pedestrian detection technologies. Full article
(This article belongs to the Section Sensing and Imaging)
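The mapping of high-level semantics onto low-level feature details, described above as an effect comparable to mask modulation, can be sketched as a sigmoid gate computed from up-sampled high-level features. The module below is an illustrative approximation, not the published AGFPN code.

```python
# Hedged sketch of the general idea described in the abstract: use up-sampled high-level
# semantics as a gate over low-level details (a mask-modulation-like effect).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGate(nn.Module):
    def __init__(self, high_channels: int, low_channels: int):
        super().__init__()
        self.to_gate = nn.Conv2d(high_channels, low_channels, kernel_size=1)

    def forward(self, low_feat, high_feat):
        # Upsample coarse semantics to the low-level resolution, squash to [0, 1],
        # and use the result to modulate (gate) the detailed low-level feature map.
        gate = torch.sigmoid(self.to_gate(
            F.interpolate(high_feat, size=low_feat.shape[-2:], mode="bilinear",
                          align_corners=False)))
        return low_feat * gate + low_feat  # enhanced details, residual kept

gate = SemanticGate(high_channels=256, low_channels=64)
out = gate(torch.randn(1, 64, 100, 100), torch.randn(1, 256, 25, 25))
```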
Show Figures
Figure 1: Pedestrian detection results in a crowded scenario. The green boxes indicate correct predictions, and the red boxes indicate missed predictions. (a) Baseline results; (b) our results.
Figure 2: An overall architecture of the attention-guided feature enhancement network (AGFEN). It is a faster R-CNN with a ResNet-50+FPN as its backbone, also with an additional attention-guided feature pyramid network (AGFPN) and a RoI features aggregation (RoI-A) operation. AGFEN uses high-level semantic features to enhance the texture of low-level features and strengthen the boundary information between pedestrians and background so as to achieve the purpose of highlighting pedestrians while suppressing background. In the meantime, the detection network also makes full use of the weight of the classifier to aggregate the features of each RoI, thereby improving the representation of RoIs.
Figure 3: The structure of AGFPN. This is the FPN-like network, which is used to add attention information to high-resolution visual information. This process is implemented under the guidance of high-level semantic information.
Figure 4: The structure of the attention-guided sampling-based details enhancement module. Note that for simplicity, the diagram uses the L3 link as an example to show AGSDE in detail. The AGSDE on the rest of the links follows exactly how this diagram operates.
Figure 5: The relationship between the number of RoI feature maps and GFLOPs of the ROI-A module.
Figure 6: The two detectors were compared visually on the CrowdHuman validation set. The solid red line boxes represent ground truth, the solid green line boxes represent predictions of the detector, and the dotted red line boxes represent missed detections. (a) The recent DMSFLN; (b) our AGFEN.
23 pages, 9333 KiB  
Article
Unique Features of Extremely Halophilic Microbiota Inhabiting Solar Saltworks Fields of Vietnam
by Violetta La Cono, Gina La Spada, Francesco Smedile, Francesca Crisafi, Laura Marturano, Alfonso Modica, Huynh Hoang Nhu Khanh, Pham Duc Thinh, Cao Thi Thuy Hang, Elena A. Selivanova, Ninh Khắc Bản and Michail M. Yakimov
Microorganisms 2024, 12(10), 1975; https://doi.org/10.3390/microorganisms12101975 - 29 Sep 2024
Viewed by 776
Abstract
The artificial solar saltworks fields of Hon Khoi are important industrial and biodiversity resources in southern Vietnam. Most hypersaline environments in this area are characterized by saturated salinity, nearly neutral pH, intense ultraviolet radiation, elevated temperatures and fast desiccation processes. However, the extremely halophilic prokaryotic communities associated with these stressful environments remain uninvestigated. To fill this gap, a metabarcoding approach was conducted to characterize these communities by comparing them with solar salterns in northern Vietnam as well as with the Italian salterns of Motya and Trapani. Sequencing analyses revealed that the multiple reuses of crystallization ponds apparently create significant perturbations and structural instability in prokaryotic consortia. However, some interesting features were noticed when we examined the diversity of ultra-small prokaryotes belonging to Patescibacteria and DPANN Archaea. Surprisingly, we found at least five deeply branched clades, two from Patescibacteria and three from DPANN Archaea, which seem to be quite specific to the Hon Khoi saltworks field ecosystem and can be considered as a part of biogeographical connotation. Further studies are needed to characterize these uncultivated taxa, to isolate and cultivate them, which will allow us to elucidate their ecological role in these hypersaline habitats and to explore their biotechnological and biomedical potential. Full article
(This article belongs to the Special Issue Halophilic Microorganisms, 2nd Edition)
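The community comparison in this study hinges on a Bray–Curtis dissimilarity matrix and hierarchical clustering of relative abundances (Figure 3 below). The sketch that follows uses a toy abundance table and SciPy to show the computation; sample names and values are placeholders.

```python
# Illustrative sketch of the community comparison shown in Figure 3: a Bray-Curtis
# dissimilarity matrix over relative abundances followed by hierarchical clustering.
# The toy abundance table below is a placeholder, not the study's ASV data.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

samples = ["HonKhoi_1", "HonKhoi_2", "NorthVN", "Motya", "Trapani"]
abundance = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.35, 0.35, 0.20, 0.10],
    [0.10, 0.20, 0.40, 0.30],
    [0.05, 0.15, 0.50, 0.30],
    [0.07, 0.13, 0.45, 0.35],
])  # rows: samples, columns: taxa (relative abundances)

dist = pdist(abundance, metric="braycurtis")    # condensed dissimilarity matrix
tree = linkage(dist, method="average")          # UPGMA-style clustering
print(squareform(dist).round(2))
dendrogram(tree, labels=samples, no_plot=True)  # structure only; plotting omitted
```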
Show Figures
Figure 1: Locations of sampling sites in Vietnam (A) and Italy (B).
Figure 2: Evaporation path of seawater in the Na+ vs. Br− (A), Mg2+ vs. Br− (B) and Cl− vs. Br− plots (C). Average brine compositions of Vietnamese samples are indicated as yellow bubbles. Seawater values are shown as blue bubbles.
Figure 3: Bar charts of Bacteria and Archaea identified in all analyzed solar salterns and salt (halite) samples (A) and their relative sequence abundance at the phylum level: (B) Bacteria; (C) Archaea. The hierarchical clustering based on the Bray–Curtis dissimilarity matrix of community compositions is shown above the bar charts.
Figure 4: The Mantel test demonstrates the correlation between targeted phyla, present at relative abundances > 1.0%, and physicochemical factors. Mantel's r coefficient quantifies this relationship, with line width representing correlation strength and color indicating statistical significance based on 9999 permutations (p < 0.01, p < 0.05). A Pearson correlation coefficient matrix reveals the interrelationships among dependent variables.
Figure 5: Bar charts of taxonomic classification (at genus level) of Bacteria identified in all analyzed solar salterns and salt (halite) samples. Only genera whose average relative abundances (>1.0%), based on 16S rRNA gene analysis, are shown. Genera with ambiguous affiliation were combined and are shown as NA. Genera with relative abundances less than 1.0% were combined and are depicted as Other. A complete list of all identified genera and their relative abundances is reported in Tables S1 and S3.
Figure 6: Bar charts of taxonomic classification (at genus level) of Archaea identified in all analyzed solar salterns and salt (halite) samples. Only genera whose average relative abundances (>1.0%), based on 16S rRNA gene analysis, are shown. Genera with ambiguous affiliation were joined and shown as Not affiliated. Genera with relative abundances less than 1.0% were joined and depicted as Other. A complete list of all identified genera and their relative abundances is reported in Tables S2 and S4.
Figure 7: Randomized Axelerated Maximum Likelihood (RAxML) tree of 16S rRNA genes of the superphylum Patescibacteria and different phyla of the superphylum DPANN Archaea. A phylogeny was generated using 110 (Nanohaloarchaeota), 16 (Nanoarchaeota), 7 (Aenigmatarchaeota) and 62 (Patescibacteria) distinct ASVs obtained during this study (highlighted in red) and reference GenBank riboclones. The tree was constructed based on taxonomy, assigned to ASVs using a naïve Bayesian classifier method against the Silva Database v138 (https://www.arb-silva.de/documentation/release-138 and https://zenodo.org/record/4587955#.YgKJlb_MJH4, both last accessed on 20 June 2024). After alignment, the neighbor-joining algorithm of the ARB v.7.0 software package was used to generate the phylogenetic trees based on distance analysis for 16S rRNA. The tree was additionally inferred in the maximum likelihood framework using the MEGA v.6.0 software. The robustness of inferred topologies was tested by bootstrap resampling using the same distance model (1000 replicates of the original dataset). The scale bar represents the average number of substitutions per site. Deeply branched clades and nanohaloarchaeal sequences obtained in polysaccharidolytic enrichments are highlighted in bold red color. Cultivated nanohaloarchaea (A–F) are shown in the insert. Abbreviations used: AA, Aengimatarchaeota; DBCVN, deeply branched clade of Vietnamese Nanohaloarchaeota; DBCVW, deeply branched clade of Vietnamese Woesehaloarchaeota; NA, Nanoarchaeota.
Figure 8: Randomized Axelerated Maximum Likelihood (RAxML) tree of 16S rRNA genes of different classes of the superphylum Patescibacteria. A phylogeny was generated using 64 distinct ASVs and reference GenBank riboclones. The robustness of inferred topologies was tested by bootstrap resampling using the same distance model (1000 replicates of the original dataset). The scale bar represents the average number of substitutions per site. Reference GenBank riboclones obtained from hypersaline environments, soil, groundwater and mud volcanoes are designated by stars and the letters S, W and MV, respectively. Abbreviations used: Absconditabact., Candidatus Absconditabacteria; CmpB, Candidatus Campbellbacteria; GrB, Candidatus Gracilibacteria; NA, not affiliated; PcB, Candidatus Pacebacteria.
16 pages, 4901 KiB  
Article
Ag/Mo Doping for Enhanced Photocatalytic Activity of Titanium (IV) Dioxide during Fuel Desulphurization
by Zahraa A. Hamza, Jamal J. Dawood and Murtadha Abbas Jabbar
Molecules 2024, 29(19), 4603; https://doi.org/10.3390/molecules29194603 - 27 Sep 2024
Viewed by 468
Abstract
Regarding photocatalytic oxidative desulphurization (PODS), titanium oxide (TiO2) is a promising contender as a catalyst due to its photocatalytic prowess and long-term performance in desulphurization applications. This work demonstrates the effectiveness of double-doping TiO2 in silver (Ag) and molybdenum (Mo) for use as a novel catalyst in the desulphurization of light-cut hydrocarbons. FESEM, EDS, and AFM were used to characterize the morphology, doping concentration, surface features, grain size, and grain surface area of the Ag/Mo powder. On the other hand, XRD, FTIR spectroscopy, UV-Vis, and PL were used for structure and functional group detection and light absorption analysis based on TiO2’s illumination properties. The microscopic images revealed nanoparticles with irregular shapes, and a 3D-AFM image was used to determine the catalyst’s physiognomies: 0.612 nm roughness and a surface area of 811.79 m2/g. The average sizes of the grains and particles were calculated to be 32.15 and 344.4 nm, respectively. The XRD analysis revealed an anatase structure for the doped TiO2, and the FTIR analysis exposed localized functional groups, while the absorption spectra of the catalyst, obtained via UV-Vis, revealed a broad spectrum, including visible and near-infrared regions up to 1053.34 nm. The PL analysis showed luminescence with a lower emission intensity, indicating that the charge carriers were not thoroughly combined. This study’s findings indicate a desulphurization efficiency of 97%. Additionally, the promise of a nano-homogeneous particle distribution bodes well for catalytic reactions. The catalyst retains its efficiency when it is dried and reused, demonstrating its sustainable use while maintaining the desulphurization efficacy. This study highlights the potential of the double doping approach in enhancing the catalytic properties of TiO2, opening up new possibilities for improving the performance of photo-oxidative processes. Full article
(This article belongs to the Special Issue Advanced Materials for Energy Conversion and Water Sustainability)
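A desulphurization efficiency such as the reported 97% is conventionally computed from sulphur concentrations before and after treatment; the paper does not spell out the formula here, so the snippet below is an assumed, illustrative calculation with placeholder concentrations.

```python
# Simple illustration of how a desulphurization efficiency figure such as 97% is
# typically computed from sulphur concentrations before and after treatment
# (assumed formula; the concentrations below are placeholders).
def desulphurization_efficiency(s_initial_ppm: float, s_final_ppm: float) -> float:
    """Percentage of sulphur removed from the fuel cut."""
    return 100.0 * (s_initial_ppm - s_final_ppm) / s_initial_ppm

print(desulphurization_efficiency(500.0, 15.0))  # -> 97.0
```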
Show Figures
Figure 1: Micrograph of the Ag/Mo-doped TiO2 powder. (a) Catalysed powder particles; (b) discrete particle size.
Figure 2: Particle size distribution of Ag/Mo-doped TiO2.
Figure 3: Three- and two-dimensional AFM images of the deposited Ag/Mo-doped TiO2 powder.
Figure 4: EDS spectrum with mapping of the Ag/Mo-doped TiO2 catalyst.
Figure 5: X-ray diffraction of TiO2, Ag-doped TiO2, Mo-doped TiO2, and Ag/Mo-doped TiO2.
Figure 6: FTIR spectra of (a) TiO2, (b) Ag-doped TiO2, (c) Mo-doped TiO2, and (d) Ag/Mo-doped TiO2.
Figure 7: UV-Vis spectra of TiO2, Ag-doped TiO2, Mo-doped TiO2, and Ag/Mo-doped TiO2.
Figure 8: PL spectra of the catalysts.
Figure 9: Desulphurization efficiency of TiO2 catalysts.
Figure 10: Desulphurization efficiency of the catalysts with exposure over time.
Figure 11: Sol–gel method used to prepare the Ag/Mo-doped TiO2.
23 pages, 9665 KiB  
Article
Effects of Powder Reuse and Particle Size Distribution on Structural Integrity of Ti-6Al-4V Processed via Laser Beam Directed Energy Deposition
by MohammadBagher Mahtabi, Aref Yadollahi, Courtney Morgan-Barnes, Matthew W. Priddy and Hongjoo Rhee
J. Manuf. Mater. Process. 2024, 8(5), 209; https://doi.org/10.3390/jmmp8050209 - 25 Sep 2024
Viewed by 1208
Abstract
In metal additive manufacturing, reusing collected powder from previous builds is a standard practice driven by the substantial cost of metal powder. This approach not only reduces material expenses but also contributes to sustainability by minimizing waste. Despite its benefits, powder reuse introduces challenges related to maintaining the structural integrity of the components, making it a critical area of ongoing research and innovation. The reuse process can significantly alter powder characteristics, including flowability, size distribution, and chemical composition, subsequently affecting the microstructures and mechanical properties of the final components. Achieving repeatable and consistent printing outcomes requires powder particles to maintain specific and consistent physical and chemical properties. Variations in powder characteristics can lead to inconsistencies in the microstructural features of printed components and the formation of process-induced defects, compromising the quality and reliability of the final products. Thus, optimizing the powder recovery and reuse methodology is essential to ensure that cost reduction and sustainability benefits do not compromise product quality and reliability. This study investigated the impact of powder reuse and particle size distribution on the microstructural and mechanical properties of Ti-6Al-4V specimens fabricated using a laser beam directed energy deposition technique. Detailed evaluations were conducted on reused powders with two different size distributions, which were compared with their virgin counterparts. Microstructural features and process-induced defects were examined using scanning electron microscopy and X-ray computed tomography. The findings reveal significant alterations in the elemental composition of reused powder, with distinct trends observed for small and large particles. Additionally, powder reuse substantially influenced the formation of process-induced defects and, consequently, the fatigue performance of the components. Full article
(This article belongs to the Special Issue Fatigue and Fracture Mechanics in Additive Manufacturing)
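The defect analysis summarized in this abstract (see the Figure 13 caption below) describes the largest detected defects with an extreme value (Gumbel) distribution. The following is a minimal, hypothetical sketch of such a fit using SciPy; the defect sizes are simulated for illustration and are not the authors' data.

```python
# Sketch: extreme-value (Gumbel) analysis of largest-defect sizes, in the spirit
# of the paper's Figure 13. The defect diameters below are simulated, not real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical largest-defect equivalent diameters (micrometres) from XCT scans
# of specimens built with virgin and reused powder.
defects_virgin = rng.gumbel(loc=40.0, scale=8.0, size=60)
defects_reused = rng.gumbel(loc=55.0, scale=12.0, size=60)

for label, d in [("virgin", defects_virgin), ("reused", defects_reused)]:
    loc, scale = stats.gumbel_r.fit(d)  # maximum-likelihood Gumbel fit
    # Size of defect expected to be exceeded once in 1000 inspected volumes.
    d_999 = stats.gumbel_r.ppf(0.999, loc=loc, scale=scale)
    print(f"{label}: loc={loc:.1f} um, scale={scale:.1f} um, "
          f"99.9th-percentile defect ~ {d_999:.1f} um")
```

A larger fitted location or scale parameter for the reused-powder specimens would indicate a shift toward larger critical defects, which is one plausible way to connect defect statistics to the reported drop in fatigue performance.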
Show Figures
Figure 1: Schematic of the powder reuse process employed in this study for powders with small and large particle size distributions (PSDs).
Figure 2: SEM images of plasma-atomized Ti-6Al-4V powders in different conditions: (a) virgin–small, (b) virgin–large, (c) reused–small, and (d) reused–large.
Figure 3: Particle size distributions (PSDs) of small and large powder samples before and after reuse, measured using laser diffraction.
Figure 4: Powder flow rates of samples with small and large particle size distributions (PSDs) before and after reuse, measured using a standard Hall Flowmeter.
Figure 5: Elemental compositions of powder samples with small and large particle size distributions (PSDs) before and after reuse.
Figure 6: XRD patterns of powder samples with small and large particle size distributions (PSDs) before and after reuse.
Figure 7: SEM images of the microstructure for specimens fabricated using (a,b) small powder particles and (c,d) large powder particles in the virgin (a,c) and reused (b,d) conditions.
Figure 8: XCT images of DED-LB Ti-6Al-4V fatigue specimens fabricated using powders with small and large particle size distributions (PSDs) before and after reuse.
Figure 9: Volumes of detected defects for DED-LB Ti-6Al-4V specimens fabricated using (a) small and (b) large particle size distributions (PSDs) before and after reuse.
Figure 10: Vickers microhardness values for DED-LB Ti-6Al-4V specimens fabricated using powders with small and large particle size distributions (PSDs) before and after reuse.
Figure 11: Fully reversed (R = −1) fatigue stress–life data for DED-LB Ti-6Al-4V specimens fabricated using powders with small and large particle size distributions (PSDs) before and after reuse.
Figure 12: Cumulative volume fraction of powder particles with small and large particle size distributions (PSDs) before and after reuse.
Figure 13: (a) Probability density function and (b) cumulative distribution function using the extreme value (Gumbel) distribution for defects detected in specimens fabricated using powders with small and large particle size distributions (PSDs) before and after reuse.
Figure 14: Fatigue fracture surfaces of the Ti-6Al-4V specimens fabricated using small powder particles in (a) virgin and (b) reused states.
15 pages, 5216 KiB  
Article
Analyzing Traditional Building Materials: A Case Study on Repair Practices in Konuralp, Düzce-Türkiye
by Özlem Sallı Bideci and Büşra Sabuncu
Architecture 2024, 4(3), 763-777; https://doi.org/10.3390/architecture4030040 - 19 Sep 2024
Viewed by 638
Abstract
Incorrect decisions and faulty practices during the repair and restoration of traditional buildings can cause further damage to the structures because of the materials used in the repair. The aim of this study is to establish a scientific basis for material selection in the repair of traditional buildings in the Konuralp region through chemical and petrographic analyses. Brick, mortar, plaster, and wood samples were taken from a registered building in the Konuralp neighborhood of Düzce Province that has survived to the present day while preserving its original structural features and reflecting the characteristics of traditional housing. Chemical and petrographic analyses were carried out on the samples. Based on these analyses, a scientific basis was established for selecting material properties in the repair and reuse of traditional buildings, and suggestions are made for the analysis of materials specific to traditional buildings in Konuralp. Full article
Show Figures
Figure 1: The location map and a visual of the location of the plot in question.
Figure 2: The bird’s eye view, plans, and section and layout plans of the building.
Figure 3: Mortar and plaster samples of building number 1766: (a) interior plaster number H5, (b) exterior plaster number H6, and (c) filling mortar number H7.
Figure 4: Images of wood samples: (a) cabinet joinery and (b) floor covering.
Figure 5: Plot 1766. Ground floor sampling numbers are given on the blueprint.
Figure 6: Sieve analysis of the bulk size distributions of the parts that do not undergo acid changes.
Figure 7: (a) Stereomicroscope and (b) polarizing microscope images of sample H5.
Figure 8: (a) Stereomicroscope and (b) polarizing microscope images of sample H6.
Figure 9: (a) Stereomicroscope and (b) polarizing microscope images of sample H7.
Figure 10: SEM images and EDX analysis element and oxide ratios of the (a) H5, (b) H6, and (c) H7 samples.
Figure 11: Thin section images of sample A2.
Figure 12: Thin section images of sample A3.
30 pages, 11567 KiB  
Article
Gini Coefficient-Based Feature Learning for Unsupervised Cross-Domain Classification with Compact Polarimetric SAR Data
by Xianyu Guo, Junjun Yin, Kun Li and Jian Yang
Agriculture 2024, 14(9), 1511; https://doi.org/10.3390/agriculture14091511 - 3 Sep 2024
Viewed by 730
Abstract
Remote sensing image classification usually needs many labeled samples so that the target nature can be fully described. For synthetic aperture radar (SAR) images, variations in the target scattering always occur to some extent due to the imaging geometry, weather conditions, and system parameters. Therefore, labeled samples in one image may not be suitable to represent the same target in other images. The domain distribution shift between different images reduces the reusability of the labeled samples. Thus, exploring cross-domain interpretation methods is of great potential for SAR images to improve the reuse rate of existing labels from historical images. In this study, an unsupervised cross-domain classification method is proposed that utilizes the Gini coefficient to rank the robust and stable polarimetric features in both the source and target domains (GRFST) such that an unsupervised domain adaptation (UDA) can be achieved. This method selects the optimal features from both the source and target domains to alleviate the domain distribution shift. Both fully polarimetric (FP) and compact polarimetric (CP) SAR features are explored for cross-domain terrain type classification. Specifically, the CP mode refers to the hybrid dual-pol mode with an arbitrary transmitting ellipse wave. This is the first attempt in the open literature to investigate the representation abilities of different CP modes for cross-domain terrain classification. Experiments are conducted from four aspects to demonstrate the performance of CP modes for cross-data, cross-scene, and cross-crop type classification. Results show that the GRFST-UDA method yields a classification accuracy 2% to 12% higher than that of traditional UDA methods. The degree of scene similarity has a certain impact on the accuracy of cross-domain crop classification. It was also found that when both the FP and circular CP SAR data are used, stable, promising results can be achieved. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
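As an illustration of the Gini-based feature ranking idea described in the abstract, the sketch below scores hypothetical polarimetric features by Gini coefficient in both the source and target domains and keeps the features that rank highly in both. The selection rule, the feature matrices, and the value of k are assumptions for illustration only, not the paper's exact GRFST-UDA pipeline (which additionally aligns the selected features with a UDA method such as SA, TCA, or JDA).

```python
# Sketch: Gini-coefficient feature ranking in source and target domains.
import numpy as np

def gini(x):
    """Gini coefficient of a 1-D array of non-negative values."""
    x = np.sort(np.abs(np.asarray(x, dtype=float)))
    n = x.size
    if x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * x) / (n * x.sum())) - (n + 1.0) / n

def rank_features(X):
    """Feature indices ordered from highest to lowest Gini coefficient."""
    scores = np.array([gini(X[:, j]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1]

def select_common_top_k(X_src, X_tgt, k=10):
    """Keep features ranked in the top-k in BOTH domains (an assumed rule,
    standing in for the paper's actual source-and-target selection)."""
    top_src = set(rank_features(X_src)[:k])
    top_tgt = set(rank_features(X_tgt)[:k])
    return sorted(top_src & top_tgt)

# Hypothetical feature matrices: rows = pixels, columns = polarimetric features.
rng = np.random.default_rng(1)
X_source = rng.gamma(2.0, 1.0, size=(500, 30))
X_target = rng.gamma(2.5, 0.8, size=(500, 30))
print("shared high-Gini features:", select_common_top_k(X_source, X_target, k=12))
```

The selected column indices would then be used to build the reduced feature sets that are fed to the domain adaptation step.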
Show Figures
Figure 1: The Pauli decomposition of FP SAR data ((a–d) are SAR images from four radar satellites (RADARSAT-2, ALOS-1, ALOS-2, and GF-3) of San Francisco, respectively. (e) is a SAR image from GF-3 of Qingdao. (f) is a SAR image from RADARSAT-2 of Jiangsu. And (g) is a SAR image from RADARSAT-2 of Yellow River).
Figure 2: Field investigation pictures of five kinds of ground objects in Jiangsu ((a) T-H. (b) D-J. (c) urban. (d) shoal. (e) water).
Figure 3: Field investigation pictures of three kinds of ground objects in Yellow River ((a) wheat. (b) water. (c) urban).
Figure 4: Flow chart of the methodology.
Figure 5: Feature importance ranking for source and target domains for CP SAR data with circular polarization transmitting.
Figure 6: The overall accuracy of cross-domain classification from SAR satellites with different band channels based on the UDA method for CP SAR data with circular polarization transmitting ((a) SA result. (b) TCA result. (c) JDA result. (d) CORAL result. (e) BDA result. (f) GFK result. (g) MEDA result. (h) MEAN result).
Figure 7: The overall accuracy of cross-domain classification for CP SAR data with circular polarization transmitting ((a) mean accuracy of cross-domain image classification. (b) mean accuracy of different SAR frequency bands cross-domain image classification. All CP-UDA: cross-domain classification based on all CP features. GFRS-UDA: cross-domain classification based on Gini coefficient feature ranking only in the source domain. GFRST-UDA: cross-domain classification based on the proposed method. Supervision: supervised classification based on the K-Nearest Neighbor classifier).
Figure 8: Cross-domain images (ALOS1-Sanf→GF3-Sanf) classification maps for CP SAR data with circular polarization transmitting ((a) SA result. (b) GFRS-SA result. (c) GFRST-SA result. (d) Supervision result).
Figure 9: The scatter plots of source and target domains feature alignment ((a–d) UJDA scatter plots. (a1–d1) GFRS-UJDA scatter plots. (a2–d2) GFRST-UJDA scatter plots. (a,a1,a2) scatter plots of the source domain. (b,b1,b2) scatter plots of the aligned source domain. (c,c1,c2) scatter plots of the aligned target domain. (d,d1,d2) scatter plots of the target domain).
Figure 10: The histograms of the dispersion coefficient of source and target domains and aligned source and target domains ((a–c) are histograms of dispersion coefficient based on UJDA, GFRS-UJDA, and GFRST-UJDA methods, respectively).
Figure 11: The overall accuracy statistics of cross-domain image classification for CP SAR data with circular polarization transmitting.
Figure 12: Cross-domain image classification results for CP SAR data with circular polarization transmitting ((a–d) (GF3-Qingdao→RS2-sanf) and (a1–d1) (ALOS2-sanf→GF3-Qingdao) are cross-domain image classification maps based on SA, GFRS-SA, GFRST-SA, and supervision classification methods, respectively).
Figure 13: The scatter plots of source and target domain feature alignment ((a–d) JDA scatter plots. (a1–d1) GFRST-JDA scatter plots. (a,a1) scatter plots of the source domain. (b,b1) scatter plots of the aligned source domain. (c,c1) scatter plots of the aligned target domain. (d,d1) scatter plots of the target domain).
Figure 14: The overall accuracy of cross-domain classification based on the UDA and GFRST-UDA methods for the FP + GCP SAR data ((a–d) are overall accuracy for the FP + CP SAR (θ = π/4, χ = π/4), the FP + CP SAR (θ = π/4, χ = π/6), the FP + CP SAR (θ = π/4, χ = π/8), and the FP + CP SAR (θ = π/4, χ = 0), respectively).
Figure 15: Cross-domain image classification results for the FP + CP SAR (θ = π/4, χ = π/4) data ((a–h) SA results. (a1–h1) GFRST-SA results. The results from (a–h) and from (a1–h1) correspond to eight cross-domain pair classification maps, respectively).
Figure 16: The overall accuracy statistics of cross-domain classification for the FP + CP SAR and the FP SAR data, respectively ((a) UDA result. (b) GFRST-UDA result).
Figure 17: The overall accuracy of cross-domain classification based on the UDA and GFRST-UDA methods for the GCP SAR data ((a–d) are overall accuracy for the FP + CP SAR (θ = π/4, χ = π/4), the FP + CP SAR (θ = π/4, χ = π/6), the FP + CP SAR (θ = π/4, χ = π/8), and the FP + CP SAR (θ = π/4, χ = 0), respectively).
Figure 18: Cross-domain image classification maps for the FP + CP SAR (θ = π/4, χ = π/4), the FP + CP SAR (θ = π/4, χ = π/6), the FP + CP SAR (θ = π/4, χ = π/8), the FP + CP SAR (θ = π/4, χ = 0), and the FP SAR data based on the GFRST-USA method, respectively ((a–e), (a1–e1), (a2–e2), and (a3–e3) are cross-domain image (GF3 Qingdao, RS2 Sanf, RS2 Jiangsu T-H, and RS2 Jiangsu D-J→RS2 Yellow River) results, respectively).
Figure 19: The overall accuracy of cross-domain classification for the FP + CP SAR and the FP SAR data, respectively ((a) UDA result. (b) GFRST-UDA result).
18 pages, 5527 KiB  
Article
Leveraging Off-the-Shelf WiFi for Contactless Activity Monitoring
by Zixuan Zhu, Wei Liu, Hao Zhang and Jinhu Lu
Electronics 2024, 13(17), 3351; https://doi.org/10.3390/electronics13173351 - 23 Aug 2024
Viewed by 516
Abstract
Monitoring human activities, such as walking, falling, and jumping, provides valuable information for personalized health assistants. Existing solutions require the user to carry/wear certain smart devices to capture motion/audio data, use a high-definition camera to record video data, or deploy dedicated devices to collect wireless data. However, none of these solutions are widely adopted for reasons such as discomfort, privacy, and overheads. Therefore, an effective solution to provide non-intrusive, secure, and low-cost human activity monitoring is needed. In this study, we developed a contactless human activity monitoring system that utilizes channel state information (CSI) of the existing ubiquitous WiFi signals. Specifically, we deployed a low-cost commercial off-the-shelf (COTS) router as a transmitter and reused a desktop equipped with an Intel WiFi Link 5300 NIC as a receiver, allowing us to obtain CSI data that recorded human activities. To remove the outliers and ambient noise present in raw CSI signals, an integrated filter consisting of Hampel, wavelet, and moving average filters was designed. Then, a new metric based on kurtosis and standard deviation was designed to select, from the 30 candidate subcarriers, an optimal set that is sensitive to all target activities. Finally, we selected a group of features, including time- and frequency-domain features, and trained a classification model to recognize different indoor human activities. Our experimental results demonstrate that the proposed system can achieve a mean accuracy of above 93%, even at long sensing distances. Full article
(This article belongs to the Special Issue Recent Research in Positioning and Activity Recognition Systems)
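To illustrate the kind of preprocessing the abstract describes, the sketch below applies a simple Hampel filter to each subcarrier and scores subcarriers with one possible combination of standard deviation and kurtosis. The scoring formula, the simulated CSI matrix, and all parameter values are assumptions for illustration; the paper's exact metric and filter settings are not reproduced here.

```python
# Sketch: Hampel outlier removal and subcarrier ranking on simulated CSI amplitudes.
import numpy as np
from scipy.stats import kurtosis

def hampel(x, window=7, n_sigmas=3.0):
    """Replace samples deviating from the rolling median by > n_sigmas * MAD."""
    x = np.asarray(x, dtype=float).copy()
    k = 1.4826  # MAD-to-std scale factor for Gaussian data
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            x[i] = med
    return x

def subcarrier_sensitivity(csi):
    """csi: (n_packets, n_subcarriers) amplitudes -> one score per subcarrier.
    An assumed heuristic: favor large but smooth (low-kurtosis) variations."""
    std = csi.std(axis=0)
    kur = kurtosis(csi, axis=0, fisher=False)
    return std / np.maximum(kur, 1e-6)

# Simulated CSI: 30 subcarriers, the last few modulated by an "activity" pattern.
rng = np.random.default_rng(2)
csi = rng.normal(20.0, 0.5, size=(1000, 30))
csi[:, 25:] += 3.0 * np.sin(np.linspace(0, 20 * np.pi, 1000))[:, None]
csi_filtered = np.apply_along_axis(hampel, 0, csi)
best = np.argsort(subcarrier_sensitivity(csi_filtered))[::-1][:5]
print("most activity-sensitive subcarriers:", best)
```

In a real pipeline, the selected subcarriers would then feed the time- and frequency-domain feature extraction and the activity classifier mentioned in the abstract.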
Show Figures
Figure 1: Illustration of WiFi signal multipath propagation.
Figure 2: Overview diagram of the SVM algorithm.
Figure 3: Proposed system architecture.
Figure 4: Deployment of the proposed system in an office.
Figure 5: Raw CSI signals.
Figure 6: Hampel-filtered CSI signals.
Figure 7: Wavelet-filtered CSI signals.
Figure 8: CSI signals after applying moving average filter.
Figure 9: CSI signal changes under human activity. (a) Raw CSI signals. (b) Filtered CSI signals.
Figure 10: Calculated sensitivity of each subcarrier to human activity.
Figure 11: CSI amplitude under different human activities (subcarrier 30).
Figure 12: The overall performance of the proposed system.
Figure 13: The recognition performance of the system for different users.
Figure 14: The impact of transceiver distance.
Figure 15: The impact of transceiver height.
21 pages, 2501 KiB  
Article
RetinaViT: Efficient Visual Backbone for Online Video Streams
by Tomoyuki Suzuki and Yoshimitsu Aoki
Sensors 2024, 24(17), 5457; https://doi.org/10.3390/s24175457 - 23 Aug 2024
Viewed by 686
Abstract
In online video understanding, which has a wide range of real-world applications, inference speed is crucial. Many approaches involve frame-level visual feature extraction, which often represents the biggest bottleneck. We propose RetinaViT, an efficient method for extracting frame-level visual features in an online video stream, aiming to fundamentally enhance the efficiency of online video understanding tasks. RetinaViT is composed of efficiently approximated Transformer blocks that only take changed tokens (event tokens) as queries and reuse the already processed tokens from the previous timestep for the others. Furthermore, we restrict keys and values to the spatial neighborhoods of event tokens to further improve efficiency. RetinaViT involves tuning multiple parameters, which we determine through a multi-step process. During model training, we randomly vary these parameters and then perform black-box optimization to maximize accuracy and efficiency on the pre-trained model. We conducted extensive experiments on various online video recognition tasks, including action recognition, pose estimation, and object segmentation, validating the effectiveness of each component in RetinaViT and demonstrating improvements in the speed/accuracy trade-off compared to baselines. In particular, for action recognition, RetinaViT built on ViT-B16 reduces inference time by approximately 61.9% on the CPU and 50.8% on the GPU, while achieving slight accuracy improvements rather than degradation. Full article
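A minimal sketch of the event-token idea summarized above is given below, assuming a single attention layer and a fixed change threshold. It omits the paper's restriction of keys and values to the neighborhood of event tokens and its parameter optimization; the module name, dimensions, and threshold are illustrative only, not the authors' implementation.

```python
# Sketch: event-token reuse in one attention block, in the spirit of RetinaViT.
# Tokens that changed by more than `delta` since the previous frame are queries;
# outputs for the remaining tokens are reused from a cache.
import torch
import torch.nn as nn

class EventReuseAttention(nn.Module):
    def __init__(self, dim=192, heads=3, delta=0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.delta = delta
        self.prev_in = None   # tokens seen at the previous timestep
        self.prev_out = None  # outputs computed at the previous timestep

    def forward(self, x):               # x: (1, n_tokens, dim), one frame
        if self.prev_in is None:        # first frame: process everything
            out, _ = self.attn(x, x, x)
        else:
            change = (x - self.prev_in).norm(dim=-1)   # (1, n_tokens)
            event = change[0] > self.delta             # event-token mask
            out = self.prev_out.clone()                # start from reused outputs
            if event.any():
                q = x[:, event, :]                     # changed tokens as queries
                upd, _ = self.attn(q, x, x)            # keys/values: all tokens here
                out[:, event, :] = upd
        self.prev_in, self.prev_out = x.detach(), out.detach()
        return out

# Two consecutive "frames" of 196 tokens; only the first 10 tokens change.
tokens_t0 = torch.randn(1, 196, 192)
tokens_t1 = tokens_t0.clone()
tokens_t1[:, :10, :] += 1.0
block = EventReuseAttention()
_ = block(tokens_t0)                   # full pass on the first frame
out_t1 = block(tokens_t1)              # only ~10 event tokens recomputed
print(out_t1.shape)
```

The savings come from the query count dropping from 196 to the handful of event tokens on the second frame; the paper additionally shrinks the key/value set to the event tokens' spatial neighborhoods.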
Show Figures
Figure 1: Overview of RetinaViT. Based on Vision Transformer (ViT) [13], RetinaViT converts an input frame image into tokens and processes them with a stack of Transformer blocks. The key difference is that RetinaViT detects tokens that have changed compared to those at the previous timestep in the same stage (block), referred to as event tokens. It then inputs only these event tokens as queries to the Transformer blocks for feature extraction, while reusing the previous tokens for the rest (represented by rectangles with red edges). For simplicity, this figure does not show the restriction of keys and values to the neighborhood of event tokens (see Figure 2 and Section 3.1 for details). This simple framework is task-agnostic, and RetinaViT can be used as the backbone for a wide range of online video recognition tasks.
Figure 2: Original Transformer block (top) and Retina block (bottom). Each rectangle represents a token. In the Retina block, only event tokens, i.e., tokens that have changed over time, are input as queries, and the previous information is reused for the rest. In addition, by restricting the context tokens to the spatial neighborhood of the event tokens, the computational cost is further reduced.
Figure 3: Sanity check on 50Salads [70]. We present trade-off curves between the accuracy and inference time on the CPU (left) and GPU (right). In the legends, the start and end points of each arrow represent training and inference strategies, respectively. "Origin" in each graph represents the result where no token selection is used (original ViT). We draw dashed arrows representing the improvements in the trade-off between our method (event-based → event-based) and the corresponding "origin".
Figure 4: Ablation results for the locations of event token detection on 50Salads [70]. We show trade-off curves between the accuracy and inference time on the CPU (left) and GPU (right). Note that we do not show "origin" (original ViT) in this figure to clarify the differences, but we have successfully improved the trade-off significantly compared to the "origin" as shown in Figure 3.
Figure 5: Ablation results for local context tokens and post-fine-tuning on 50Salads [70]. We show trade-off curves between the accuracy and inference time on the CPU (left) and GPU (right). Note that we do not show "origin" (original ViT) in this figure to clarify the differences, but we have successfully improved the trade-off significantly compared to the "origin" as shown in Figure 3.
Figure 6: Comparisons of the learning curves between RetinaViT-S16 and the original ViT-S16. The vertical axis represents the loss for each task, and the horizontal axis represents the number of epochs. In all datasets, the validation learning curves of RetinaViT are relatively stable.
Figure 7: Comparisons of the trade-off between the accuracy and inference time on 50Salads [70] (val). Inference time was measured on the CPU (left) and GPU (right). "Sw" and "Res" represent Swin Transformer [18] and ResNet [82], respectively. "DC" represents DeltaCNN [11], which we only use on the GPU as it does not support CPU inference. Dashed arrows represent the improvements in the trade-off between RetinaViT and the corresponding original ViT.
Figure 8: Comparisons of the trade-off between accuracy (PCK@0.2 [75]) and inference time on Sub-JHMDB [74] (val). Inference time was measured on the CPU (left) and GPU (right). "HR" represents HR-Net [83]. "DC" represents DeltaCNN [11], which we only use on the GPU as it does not support CPU inference.
Figure 9: Trade-off comparisons between the accuracy (𝒢 score [77]) and inference time on DAVIS17 [77] (test-dev). Inference time was measured on the CPU (left) and GPU (right). "DC" represents DeltaCNN [11], which we only use on the GPU as it does not support CPU inference.
Figure 10: Visualization of event scores, event tokens, and predictions on 50Salads [70]. The frames are arranged from left to right in time order, and each column represents the same timestep. "no drop" represents the prediction without dropping tokens (i.e., δ_l = 0 for all l) overlaid on the input frames. "drop" represents the event tokens in the fourth block, on which the corresponding prediction is overlaid. For visibility, we overlaid the locations of event tokens on the corresponding input RGB frames, where the non-event tokens are blacked out. The predictions are drawn in green if correct and in red if incorrect. "event score" represents the event scores as a heatmap at the input for the fourth block. Note that all tokens are processed in the first frame of each video clip, which is not depicted in the figure.
Figure 11: Visualization of event scores, event tokens, and predictions on Sub-JHMDB [84]. The frames are arranged from left to right in time order, and each column represents the same timestep. "no drop" represents the prediction (the locations of the key points) without dropping tokens (i.e., δ_l = 0 for all l), overlaid on the input frames. "drop" represents the event tokens in the fourth block, on which the corresponding prediction is overlaid. For visibility, we overlaid the locations of event tokens on the corresponding input RGB frames, where the non-event tokens are blacked out. "event score" represents the event scores as a heatmap at the input for the fourth block. Note that all tokens are processed in the first frame of each video clip, which is not depicted in the figure.
Figure 12: Visualization of event scores, event tokens, and predictions on DAVIS2017 [77]. The frames are arranged from left to right in time order, and each column represents the same timestep. "no drop" represents the prediction masks without dropping tokens (i.e., δ_l = 0 for all l), overlaid on the input frames. "drop" represents the event tokens in the fourth block, on which the corresponding prediction is overlaid. For visibility, we overlaid the locations of event tokens on the corresponding input RGB frames, where the non-event tokens are blacked out. The prediction masks are drawn in different colors for each instance. "event score" represents the event scores as a heatmap at the input for the fourth block. For the last two videos, we show two "drops" with different thresholds δ_l as examples where it is difficult to reduce computational costs while maintaining accuracy due to large camera motion. Note that all tokens are processed in the first frame of each video clip, which is not depicted in the figure.