Search Results (1,945)

Search Parameters:
Keywords = visual tracking

38 pages, 1648 KiB  
Systematic Review
Using Eye-Tracking to Assess Dyslexia: A Systematic Review of Emerging Evidence
by Eugenia I. Toki
Educ. Sci. 2024, 14(11), 1256; https://doi.org/10.3390/educsci14111256 - 17 Nov 2024
Abstract
Reading is a complex skill that requires accurate word recognition, fluent decoding, and effective comprehension. Children with dyslexia often face challenges in these areas, resulting in ongoing reading difficulties. This study systematically reviews the use of eye-tracking technology to assess dyslexia, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. The review identifies the specific types of eye-tracking technologies used, examines the cognitive and behavioral abilities assessed (such as reading fluency and attention), and evaluates the primary purposes of these evaluations—screening, assessment, and diagnosis. This study explores key questions, including how eye-tracking outcomes guide intervention strategies and influence educational practices, and assesses the practicality and time efficiency of these evaluations in real-world settings. Furthermore, it considers whether eye-tracking provides a holistic developmental profile or a targeted analysis of specific skills and evaluates the generalizability of eye-tracking results across diverse populations. Gaps in the literature are highlighted, with recommendations proposed to improve eye-tracking’s precision and applicability for early dyslexia intervention. The findings underscore the potential of eye-tracking to enhance diagnostic accuracy through metrics such as fixation counts, saccadic patterns, and processing speed, key indicators that distinguish dyslexic from typical reading behaviors. Additionally, studies show that integrating machine learning with eye-tracking data can enhance classification accuracy, suggesting promising applications for scalable, early dyslexia screening in educational settings. This review provides new insights into the value of eye-tracking technology in identifying dyslexia, emphasizing the need for further research to refine these methods and support their adoption in classrooms and clinics. Full article
(This article belongs to the Special Issue Innovative Practices for Students with Learning Disabilities)
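The review's observation that machine learning over eye-tracking metrics (fixation counts, saccadic patterns, processing speed) can separate dyslexic from typical reading behaviour can be illustrated with a minimal sketch. Every feature distribution and number below is invented for illustration, not taken from the reviewed studies; a real screening study would use measured gaze data and a validated classifier.

```python
# Hypothetical sketch: screening readers from eye-tracking summary features.
# All distributions are synthetic; feature names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
typical = np.column_stack([
    rng.normal(90, 10, n),      # fixations per passage
    rng.normal(220, 25, n),     # mean fixation duration, ms
    rng.normal(0.12, 0.03, n),  # regressive-saccade rate
    rng.normal(180, 20, n),     # reading speed, words/min
])
dyslexic = np.column_stack([
    rng.normal(140, 15, n),     # more fixations ...
    rng.normal(300, 35, n),     # ... longer fixations ...
    rng.normal(0.30, 0.05, n),  # ... more regressions ...
    rng.normal(110, 20, n),     # ... slower reading
])
X = np.vstack([typical, dyslexic])
y = np.repeat([0, 1], n)  # 0 = typical, 1 = dyslexic

# Random train/test split, then a scale-normalized nearest-centroid classifier.
idx = rng.permutation(2 * n)
train, test = idx[:300], idx[300:]
scale = X[train].std(axis=0)
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dist = np.linalg.norm((X[test][:, None, :] - centroids[None]) / scale, axis=2)
accuracy = (dist.argmin(axis=1) == y[test]).mean()
```

With well-separated synthetic groups the classifier is near-perfect; the practical difficulty the review highlights is that real populations overlap far more than this toy data does.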
18 pages, 1664 KiB  
Article
FETrack: Feature-Enhanced Transformer Network for Visual Object Tracking
by Hang Liu, Detian Huang and Mingxin Lin
Appl. Sci. 2024, 14(22), 10589; https://doi.org/10.3390/app142210589 - 17 Nov 2024
Viewed by 114
Abstract
Visual object tracking is a fundamental task in computer vision, with applications ranging from video surveillance to autonomous driving. Despite recent advances in transformer-based one-stream trackers, unrestricted feature interactions between the template and the search region often introduce background noise into the template, degrading the tracking performance. To address this issue, we propose FETrack, a feature-enhanced transformer-based network for visual object tracking. Specifically, we incorporate an independent template stream in the encoder of the one-stream tracker to acquire the high-quality template features while suppressing the harmful background noise effectively. Then, we employ a sequence-learning-based causal transformer in the decoder to generate the bounding box autoregressively, simplifying the prediction head network. Further, we present a dynamic threshold-based online template-updating strategy and a template-filtering approach to boost tracking robustness and reduce redundant computations. Extensive experiments demonstrate that our FETrack achieves a superior performance over state-of-the-art trackers. Specifically, the proposed FETrack achieves a 75.1% AO on GOT-10k, 81.2% AUC on LaSOT, and 89.3% Pnorm on TrackingNet. Full article
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)
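The dynamic threshold-based online template-updating strategy mentioned in the abstract can be illustrated with a toy sketch: refresh the template only when tracker confidence clears a threshold that adapts to recent confidence. The adaptation rule, constants, and class below are invented for illustration; they are not FETrack's actual formulation.

```python
# Toy sketch of dynamic-threshold online template updating (invented rule,
# not FETrack's implementation).
class TemplateUpdater:
    def __init__(self, base_thr=0.7, momentum=0.9):
        self.thr = base_thr        # current acceptance threshold
        self.momentum = momentum   # how slowly the threshold adapts
        self.template = None       # most recently accepted template crop

    def step(self, frame_crop, confidence):
        # Drift the threshold toward the running confidence level, so the
        # bar rises in easy sequences and relaxes in hard ones.
        self.thr = self.momentum * self.thr + (1 - self.momentum) * confidence
        # Accept a new template only when the tracker is confident enough.
        if self.template is None or confidence > self.thr:
            self.template = frame_crop
        return self.template
```

This captures the intent of the strategy (suppress low-quality template updates that would inject background noise) while skipping the feature-level details of the paper.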
32 pages, 5678 KiB  
Article
Anti-Collision Path Planning and Tracking of Autonomous Vehicle Based on Optimized Artificial Potential Field and Discrete LQR Algorithm
by Chaoxia Zhang, Zhihao Chen, Xingjiao Li and Ting Zhao
World Electr. Veh. J. 2024, 15(11), 522; https://doi.org/10.3390/wevj15110522 - 14 Nov 2024
Viewed by 303
Abstract
This paper introduces an enhanced APF method to address challenges in automatic lane changing and collision avoidance for autonomous vehicles, targeting issues of infeasible target points, local optimization, inadequate safety margins, and instability when using DLQR. By integrating a distance adjustment factor, this research aims to rectify traditional APF limitations. A safety distance model and a sub-target virtual potential field are established to facilitate collision-free path generation for autonomous vehicles. A path tracking system is designed, combining feed-forward control with DLQR. Linearization and discretization of the vehicle’s dynamic state space model, with constraint variables set to minimize control-command costs, aligns with DLQR objectives. The aim is precise steering angle determination for path tracking, negating lateral errors due to external disturbances. A Simulink–CarSim co-simulation platform is utilized for obstacle and speed scenarios, validating the autonomous vehicle’s dynamic hazard avoidance, lane changing, and overtaking capabilities. The refined APF method enhances path safety, smoothness, and stability. Experimental data across three speeds reveal reasonable steering angle and lateral deflection angle variations. The controller ensures stable reference path tracking at 40, 50, and 60 km/h around various obstacles, verifying the controller’s effectiveness and driving stability. Comparative analysis of visual trajectories pre-optimization and post-optimization highlights improvements. Vehicle roll and sideslip angle peaks, roll-angle fluctuation, and front/rear wheel steering vertical support forces are compared with traditional LQR, validating the optimized controller’s enhancement of vehicle performance. Simulation results using MATLAB/Simulink and CarSim demonstrate that the optimized controller reduces steering angles by 5 to 10°, decreases sideslip angles by 3 to 5°, and increases vertical support forces from 1000 to 1450 N, showcasing our algorithm’s superior obstacle avoidance and lane-changing capabilities under dynamic conditions. Full article
(This article belongs to the Special Issue Research on Intelligent Vehicle Path Planning Algorithm)
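For readers unfamiliar with APF, the classic attractive/repulsive formulation that the paper optimizes can be sketched as follows. The gains, influence radius, and positions below are illustrative placeholders, not the paper's tuned values or its distance-adjustment factor.

```python
# Minimal sketch of a classic artificial potential field (APF):
# an attractive force toward the goal plus Khatib-style repulsion near
# obstacles. Constants are illustrative, not the paper's optimized values.
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 100.0, 5.0  # gains and obstacle influence radius

def attractive_force(pos, goal):
    # F_att = -grad(0.5 * k * ||pos - goal||^2) = k * (goal - pos)
    return K_ATT * (goal - pos)

def repulsive_force(pos, obstacle):
    # Repulsion is active only within RHO0 of the obstacle and grows
    # sharply as the distance rho shrinks.
    diff = pos - obstacle
    rho = np.linalg.norm(diff)
    if rho >= RHO0 or rho == 0.0:
        return np.zeros_like(pos)
    return K_REP * (1.0 / rho - 1.0 / RHO0) / rho**2 * (diff / rho)

def total_force(pos, goal, obstacles):
    f = attractive_force(pos, goal)
    for ob in obstacles:
        f += repulsive_force(pos, ob)
    return f

# One gradient step along the combined field (illustrative scenario).
pos, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5])]
step = 0.05 * total_force(pos, goal, obstacles)
```

The limitations the paper targets fall directly out of this formulation: the goal can become infeasible when repulsion overwhelms attraction, and local minima occur where the two forces cancel, which is what the sub-target virtual potential field and distance adjustment factor are meant to fix.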
Show Figures

Figure 1: Principle of autonomous collision avoidance.
Figure 2: APF three-dimensional potential field. (a) Attractive potential field of the target point; (b) repulsive potential field of the static obstacle; (c) repulsive potential field of the dynamic obstacle.
Figure 3: Sub-target virtual potential field.
Figure 4: Contour maps of potential fields and collision-free trajectories.
Figure 5: Principles of vehicle dynamics.
Figure 6: Framework of DLQR tracking control algorithm.
Figure 7: Optimizing APF for overtaking and collision avoidance. (a) Changing lanes with combined force; (b) changing lanes to overtake; (c) lane change and passing complete; (d) return to main lane; (e) drive smoothly to the target point.
Figure 8: Body dimensions.
Figure 9: CarSim parameter configuration and Simulink algorithm module co-simulation platform.
Figure 10: Comparison of obstacle avoidance trajectories of the artificial potential field method before and after optimization.
Figure 11: Simulation analysis of obstacle avoidance in different obstacle scenarios.
Figure 12: Lateral angle data of the vehicle before optimization.
Figure 13: Lateral angle data of the vehicle after optimization.
Figure 14: Vehicle lateral slip angle data before optimization.
Figure 15: Vehicle lateral slip angle data following optimization.
Figure 16: Vertical support force data of the front and rear wheels before optimization.
Figure 17: Vertical support force data of the front and rear wheels after optimization.
16 pages, 25350 KiB  
Article
Eye Tracking and Human Influence Factors’ Impact on Quality of Experience of Mobile Gaming
by Omer Nawaz, Siamak Khatibi, Muhammad Nauman Sheikh and Markus Fiedler
Future Internet 2024, 16(11), 420; https://doi.org/10.3390/fi16110420 - 13 Nov 2024
Viewed by 229
Abstract
Mobile gaming accounts for more than 50% of global online gaming revenue, surpassing console and browser-based gaming. The success of mobile gaming titles depends on optimizing applications for the specific hardware constraints of mobile devices, such as smaller displays and lower computational power, to maximize battery life. Additionally, these applications must dynamically adapt to the variations in network speed inherent in mobile environments. Ultimately, user engagement and satisfaction are critical, necessitating a favorable comparison to browser and console-based gaming experiences. While Quality of Experience (QoE) subjective evaluations through user surveys are the most reliable method for assessing user perception, various factors, termed influence factors (IFs), can affect user ratings of stimulus quality. This study examines human influence factors in mobile gaming, specifically analyzing the impact of user delight towards displayed content and the effect of gaze tracking. Using Pupil Core eye-tracking hardware, we captured user interactions with mobile devices and measured visual attention. Video stimuli from eight popular games were selected, with resolutions of 720p and 1080p and frame rates of 30 and 60 fps. Our results indicate a statistically significant impact of user delight on the MOS for most video stimuli across all games. Additionally, a trend favoring higher frame rates over screen resolution emerged in user ratings. These findings underscore the significance of optimizing mobile gaming experiences by incorporating models that estimate human influence factors to enhance user satisfaction and engagement. Full article
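The MOS-with-confidence-interval analysis the study reports can be reproduced in miniature: average the per-viewer ratings for a stimulus and attach a 95% interval. The ratings below are invented, and the normal approximation is a simplification (small samples would use a t-quantile).

```python
# Sketch: Mean Opinion Score (MOS) and a 95% confidence interval from
# per-viewer ratings on the usual 1-5 ACR scale. Ratings are invented.
import statistics

ratings = [4, 5, 3, 4, 4, 5, 2, 4, 3, 4]          # one stimulus, ten viewers
mos = statistics.mean(ratings)                     # the reported MOS
sem = statistics.stdev(ratings) / len(ratings) ** 0.5
ci95 = 1.96 * sem  # normal approximation; use a t-quantile for small n
```

Comparing such intervals across the 720p/1080p and 30/60 fps conditions is what lets the authors claim a statistically meaningful preference for higher frame rates.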
Show Figures

Figure 1: Subjective assessment with Pupil Core.
Figure 2: Pupil Core software v3.5.1. (a) Pupil Capture for session recording with fixation detector in the calibrated area; (b) Pupil Player screen for replaying the individual session and exporting data.
Figure 3: Frames of video stimuli. (a) Animal Crossing; (b) Counter-Strike 2; (c) Call of Duty; (d) Code Vein; (e) Fortnite; (f) Minecraft; (g) PUBG; (h) Rocket League.
Figure 4: MOS of subjective assessment based on delight with 95% CI. (a) Animal Crossing; (b) Counter-Strike 2; (c) Code Vein; (d) PUBG.
Figure 5: Histogram of user gaze.
Figure 6: Relative frequency of gaze based on %GoB and %PoW ratings.
16 pages, 2285 KiB  
Article
Driving Fatigue Onset and Visual Attention: An Electroencephalography-Driven Analysis of Ocular Behavior in a Driving Simulation Task
by Andrea Giorgi, Gianluca Borghini, Francesca Colaiuda, Stefano Menicocci, Vincenzo Ronca, Alessia Vozzi, Dario Rossi, Pietro Aricò, Rossella Capotorto, Simone Sportiello, Marco Petrelli, Carlo Polidori, Rodrigo Varga, Marteyn Van Gasteren, Fabio Babiloni and Gianluca Di Flumeri
Behav. Sci. 2024, 14(11), 1090; https://doi.org/10.3390/bs14111090 - 13 Nov 2024
Viewed by 487
Abstract
Attentional deficits have tragic consequences on road safety. These deficits are not solely caused by distraction, since they can also arise from other mental impairments such as, most frequently, mental fatigue. Fatigue is among the most prevalent impairing conditions while driving, degrading drivers’ cognitive and physical abilities. This issue is particularly relevant for professional drivers, who spend most of their time behind the wheel. While scientific literature already documented the behavioral effects of driving fatigue, most studies have focused on drivers under sleep deprivation or anyhow at severe fatigue degrees, since it is difficult to recognize the onset of fatigue. The present study employed an EEG-driven approach to detect early signs of fatigue in professional drivers during a simulated task, with the aim of studying visual attention as fatigue begins to set in. Short-range and long-range professional drivers were recruited to take part in a 45-min-long simulated driving experiment. Questionnaires were used to validate the experimental protocol. A previously validated EEG index, the MDrow, was adopted as the benchmark measure for identifying the “fatigued” spans. Results of the eye-tracking analysis showed that, when fatigued, professional drivers tended to focus on non-informative portions of the driving environment. This paper presents evidence that an EEG-driven approach can be used to detect the onset of fatigue while driving and to study the related visual attention patterns. It was found that the onset of fatigue did not differentially impact drivers depending on their professional activity (short- vs. long-range delivery). Full article
(This article belongs to the Special Issue Neuroimaging Techniques in the Measurement of Mental Fatigue)
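The Area-of-Interest (AoI) tallies underlying the eye-tracking analysis can be sketched in a few lines: each fixation is binned into Road, Cockpit, or External Environment, and the counts are compared across fatigue conditions. The AoI rectangles and fixation coordinates below are invented, not the study's screen layout.

```python
# Sketch of an Area-of-Interest (AoI) tally for fixation data.
# AoI rectangles (x0, y0, x1, y1) and fixation points are invented.
from collections import Counter

AOIS = {
    "Road": (0, 0, 10, 5),
    "Cockpit": (0, 5, 10, 8),
    "External": (10, 0, 16, 8),
}

def aoi_of(x, y):
    """Return the name of the AoI containing point (x, y), or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

fixations = [(2, 1), (3, 2), (1, 6), (12, 3), (4, 1)]
counts = Counter(aoi_of(x, y) for x, y in fixations)
```

In the study, the interesting signal is the shift of such counts toward non-informative AoIs once the EEG-based MDrow index marks a span as "fatigued".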
Show Figures

Figure 1: Description of the experimental protocol.
Figure 2: The two driving scenarios adopted in this study (left: van drivers; right: truck drivers). To reduce noise in the data, statistical analysis was performed only on data collected while participants drove the longest straight line (circled in red). Blue arrows indicate the driving direction.
Figure 3: The AoIs designed for van (left) and truck (right) drivers. Green: Road; orange: Cockpit; blue: External Environment; purple: Cockpit Total (not discussed in this paper because of the negligible amount of attention participants paid to this AoI).
Figure 4: Results of the questionnaire analysis. Participants perceived higher levels of both sleepiness (a) and fatigue (b); both questionnaires were provided because fatigue and sleepiness can be difficult to distinguish. * p < 0.05; ** p < 0.01; *** p < 0.001.
Figure 5: EEG assessment during the resting state collected at the participants' arrival and after each driving task. After the circuit driving task (EO2), participants experienced an increase in fatigue, which was higher still after the monotonous driving task (EO3). * p < 0.05.
Figure 6: Ocular behavior during the 'Low fatigue' vs. 'High fatigue' condition. (a) Fixation Count; (b) Total Visit Duration; both measures decreased when participants were fatigued. ** p < 0.01.
Figure 7: Ocular behavior toward the External Environment during the 'Low fatigue' vs. 'High fatigue' condition. Fixation Count decreased when participants were fatigued. ** p < 0.01.
17 pages, 1906 KiB  
Article
Advancing Indoor Epidemiological Surveillance: Integrating Real-Time Object Detection and Spatial Analysis for Precise Contact Rate Analysis and Enhanced Public Health Strategies
by Ali Baligh Jahromi, Koorosh Attarian, Ali Asgary and Jianhong Wu
Int. J. Environ. Res. Public Health 2024, 21(11), 1502; https://doi.org/10.3390/ijerph21111502 - 13 Nov 2024
Viewed by 396
Abstract
In response to escalating concerns about the indoor transmission of respiratory diseases, this study introduces a sophisticated software tool engineered to accurately determine contact rates among individuals in enclosed spaces—essential for public health surveillance and disease transmission mitigation. The tool applies YOLOv8, a cutting-edge deep learning model that enables precise individual detection and real-time tracking from video streams. An innovative feature of this system is its dynamic circular buffer zones, coupled with an advanced 2D projective transformation to accurately overlay video data coordinates onto a digital layout of the physical environment. By analyzing the overlap of these buffer zones and incorporating detailed heatmap visualizations, the software provides an in-depth quantification of contact instances and spatial contact patterns, marking an advancement over traditional contact tracing and contact counting methods. These enhancements not only improve the accuracy and speed of data analysis but also furnish public health officials with a comprehensive framework to develop more effective non-pharmaceutical infection control strategies. This research signifies a crucial evolution in epidemiological tools, transitioning from manual, simulation, and survey-based tracking methods to automated, real time, and precision-driven technologies that integrate advanced visual analytics to better understand and manage disease transmission in indoor settings. Full article
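The buffer-zone overlap test at the heart of the contact counting can be sketched in a few lines: two circular zones of radius r overlap exactly when the centre distance is below 2r. The radius, floor-plan coordinates, and track IDs below are invented, not the tool's configuration.

```python
# Sketch of circular-buffer contact counting on floor-plan coordinates.
# Radius, positions, and track IDs are invented for illustration.
from itertools import combinations
import math

BUFFER_R = 1.0  # buffer radius in metres (illustrative)

def contacts(positions, radius=BUFFER_R):
    """Return pairs of track IDs whose circular buffer zones overlap."""
    out = []
    for (id_a, pa), (id_b, pb) in combinations(positions.items(), 2):
        if math.dist(pa, pb) < 2 * radius:  # circles intersect
            out.append((id_a, id_b))
    return out

# Floor-plan positions for one video frame (hypothetical tracks).
frame = {1: (0.0, 0.0), 2: (1.5, 0.0), 3: (8.0, 3.0)}
pairs = contacts(frame)  # → [(1, 2)]
```

Accumulating such pairs frame by frame, after projecting detections from image coordinates onto the floor plan, yields the contact durations and heatmaps the paper describes.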
Show Figures

Figure 1: Flowchart; steps include initialization, object detection using YOLOv8, real-time human tracking, dynamic buffer zones, spatial analysis, people counting and density analysis, and data handling and visualization.
Figure 2: Detecting and tracking individuals in an indoor environment: five individuals, each with a track line (green) and track ID (yellow).
Figure 3: Transformation of occupants onto the 2D floor plan.
Figure 4: Interaction duration analysis across tracked individuals.
Figure 5: Comparative spatial interaction heatmaps depicting density and movement patterns at time 1 s and time 31 s during the experiment.
13 pages, 1246 KiB  
Article
Sprint Management in Agile Approach: Progress and Velocity Evaluation Applying Machine Learning
by Yadira Jazmín Pérez Castillo, Sandra Dinora Orantes Jiménez and Patricio Orlando Letelier Torres
Information 2024, 15(11), 726; https://doi.org/10.3390/info15110726 - 12 Nov 2024
Viewed by 360
Abstract
Nowadays, technology plays a fundamental role in data collection and analysis, which are essential for decision-making in various fields. Agile methodologies have transformed project management by focusing on continuous delivery and adaptation to change. In multiple project management, assessing the progress and pace of work in Sprints is particularly important. In this work, a data model was developed to evaluate the progress and pace of work, based on the visual interpretation of numerical data from certain graphs that allow tracking, such as the Burndown chart. Additionally, experiments with machine learning algorithms were carried out to validate the effectiveness and potential improvements facilitated by this dataset development. Full article
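The numerical reading of a Burndown chart that the data model relies on can be sketched as follows: from the daily remaining-work series, derive per-day velocity and compare it with the ideal constant-burn rate. The hours below are invented, and the on-track rule is a deliberately simple stand-in for the paper's learned evaluation.

```python
# Sketch: deriving velocity features from a Burndown chart.
# The remaining-hours series and the on-track rule are invented.
remaining = [40, 36, 33, 28, 22, 18, 12, 5]  # hours left on each Sprint day

# Hours burned per day (the slope of the Burndown chart, day by day).
velocity = [a - b for a, b in zip(remaining, remaining[1:])]
avg_velocity = sum(velocity) / len(velocity)

# Ideal constant-burn line: finish exactly at the Sprint's last day.
ideal_rate = remaining[0] / (len(remaining) - 1)
on_track = avg_velocity >= ideal_rate
```

Features like these, extracted per Sprint, are the kind of numerical input on which the paper's machine-learning experiments can be trained.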
Show Figures

Graphical abstract
Figure 1: Burndown chart (in hours).
Figure 2: Completed vs. not completed work (in hours).
Figure 3: Access to the API method for obtaining the burndown chart.
Figure 4: Access to the API method for the completed vs. not completed UTs chart.
6 pages, 1102 KiB  
Proceeding Paper
Theoretical Study of the Effect of Weather Conditions on Vehicle Aerodynamic Properties
by Brúnó Péter and István Lakatos
Eng. Proc. 2024, 79(1), 83; https://doi.org/10.3390/engproc2024079083 - 12 Nov 2024
Viewed by 137
Abstract
One of the most widely researched fields within the automotive industry is the effect vehicles place on the environment. To achieve a sustainable transport system, reducing the pollution of vehicles is an essential issue. The aim of this paper is to examine how weather conditions influence a vehicle’s operation. The study examines potential methods to evaluate the effect of different weather conditions on the aerodynamic parameters of a vehicle. Aerodynamic properties can be measured with the help of computational fluid dynamics (CFD), a wind tunnel and test-track measurements. On-board diagnostics are also examined to collect data on aerodynamics. These methods can monitor several parameters to measure and visualize the effects of weather conditions. The theoretical background to the related aerodynamic parameters is summarized. Full article
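The aerodynamic background the paper summarizes centres on the drag equation, F_d = ½·ρ·C_d·A·v², and weather enters chiefly through the air density ρ, which the ideal-gas law ties to temperature and pressure. The vehicle parameters and conditions below are illustrative, not the paper's test cases.

```python
# Sketch: how weather (via air density) changes aerodynamic drag.
# Vehicle parameters (Cd, frontal area) and conditions are illustrative.
R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)

def air_density(pressure_pa, temp_k):
    # Ideal-gas law: rho = p / (R * T)
    return pressure_pa / (R_AIR * temp_k)

def drag_force(v_ms, cd, area_m2, rho):
    # Drag equation: F_d = 0.5 * rho * Cd * A * v^2
    return 0.5 * rho * cd * area_m2 * v_ms ** 2

rho_cold = air_density(101325, 273.15)  # 0 degC, sea-level pressure
rho_hot = air_density(101325, 303.15)   # 30 degC

# Same car (Cd = 0.30, A = 2.2 m^2) at 100 km/h: colder, denser air -> more drag.
f_cold = drag_force(100 / 3.6, 0.30, 2.2, rho_cold)
f_hot = drag_force(100 / 3.6, 0.30, 2.2, rho_hot)
```

Rain, wind, and humidity complicate the picture further (they alter both ρ and the effective relative velocity), which is why the paper compares CFD, wind-tunnel, test-track, and on-board-diagnostics measurements.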
Show Figures

Figure 1: Operating principle of the wind tunnel.
Figure 2: Measurement tools and measurable weather-condition parameters in matrix form, showing the effectiveness of each pair: the darker the cell color, the more efficient the method.
Figure 3: (a) Aerodynamic force acting on a unit surface; (b) components of aerodynamic forces and moments.
Figure 4: Projection area (gray surface) of a vehicle, in this case the Ahmed body (green).
7 pages, 1148 KiB  
Proceeding Paper
A Novel Method to Improve the Efficiency and Performance of Cloud-Based Visual Simultaneous Localization and Mapping
by Omar M. Salih, Hussam Rostum and József Vásárhelyi
Eng. Proc. 2024, 79(1), 78; https://doi.org/10.3390/engproc2024079078 - 11 Nov 2024
Viewed by 221
Abstract
Since Visual Simultaneous Localization and Mapping (VSLAM) inherently requires intensive computational operations and consumes many hardware resources, these limitations pose challenges to implementing the entire VSLAM architecture within limited processing power and battery capacity. This paper proposes a novel solution to improve the efficiency and performance of exchanging data between the unmanned aerial vehicle (UAV) and the cloud server. First, an adaptive ORB (oriented FAST and rotated BRIEF) method is proposed for precise tracking, mapping, and re-localization. Second, efficient visual data encoding and decoding methods are proposed for exchanging the data between the edge device and the UAV. The results show an improvement in the trajectory RMSE and accurate tracking using the adaptive ORB-SLAM. Furthermore, the proposed visual data encoding and decoding showed an outstanding performance compared with the most used standard JPEG-based system over high quantization ratios. Full article
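The trajectory RMSE metric the results cite is simply the root-mean-square position error between the estimated and ground-truth camera paths, computed pointwise after the trajectories are aligned. The coordinates below are invented for illustration.

```python
# Sketch: trajectory RMSE between an estimated and a ground-truth camera
# path (2D positions, already time-aligned; all values invented).
import math

est = [(0.0, 0.0), (1.1, 0.1), (2.0, -0.1), (3.2, 0.0)]
true = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]

# Squared Euclidean error at each pose, then root of the mean.
sq = [(xe - xt) ** 2 + (ye - yt) ** 2 for (xe, ye), (xt, yt) in zip(est, true)]
rmse = math.sqrt(sum(sq) / len(sq))
```

A lower RMSE after switching to the adaptive ORB front end is what the paper uses as evidence of more accurate tracking.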
Show Figures

Figure 1: Adaptive ORB-SLAM architecture.
Figure 2: The proposed visual data algorithms for (a) encoding and (b) decoding.
Figure 3: Comparison between the estimated trajectory and the actual camera trajectory.
20 pages, 3876 KiB  
Article
An IoT-Based Framework for Automated Assessing and Reporting of Light Sensitivities in Children with Autism Spectrum Disorder
by Dundi Umamaheswara Reddy, Kanaparthi V. Phani Kumar, Bandaru Ramakrishna and Ganapathy Sankar Umaiorubagam
Sensors 2024, 24(22), 7184; https://doi.org/10.3390/s24227184 - 9 Nov 2024
Viewed by 340
Abstract
Identification of light sensitivities, manifesting either as hyper-sensitive (over-stimulating) or hypo-sensitive (under-stimulating) in children with autism spectrum disorder (ASD), is crucial for the development of personalized sensory environments and therapeutic strategies. Traditional methods for identifying light sensitivities often depend on subjective assessments and manual video coding methods, which are time-consuming, and very keen observations are required to capture the diverse sensory responses of children with ASD. This can lead to challenges for clinical practitioners in addressing individual sensory needs effectively. The primary objective of this work is to develop an automated system using Internet of Things (IoT), computer vision, and data mining techniques for assessing visual sensitivities specifically associated with light (color and illumination). For this purpose, an Internet of Things (IoT)-based light sensitivities assessing system (IoT-LSAS) was designed and developed using a visual stimulating device, a bubble tube (BT). The IoT-LSAS integrates various electronic modules for (i) generating colored visual stimuli with different illumination levels and (ii) capturing images to identify children’s emotional responses during sensory stimulation sessions. The system is designed to operate in two different modes: a child control mode (CCM) and a system control mode (SCM). Each mode uses a distinct approach for assessing light sensitivities, where CCM uses a preference-based approach, and SCM uses an emotional response tracking approach. The system was tested on a sample of 20 children with ASD, and the results showed that the IoT-LSAS effectively identified light sensitivities, with a 95% agreement rate in the CCM and a 90% agreement rate in the SCM when compared to the practitioner’s assessment report. These findings suggest that the IoT-LSAS can be used as an alternative to traditional assessment methods for diagnosing light sensitivities in children with ASD, significantly reducing the practitioner’s time required for diagnosis. Full article
Show Figures

Figure 1: Architecture of IoT-LSAS.
Figure 2: Design of the bubble tube used to generate colored visual stimuli.
Figure 3: Illumination at different intensity levels, measured using a LUX meter.
Figure 4: Customized control panel for active intervention, consisting of five illuminated push buttons for selecting colored illumination and two sets of push buttons for intensity and bubble-speed control.
Figure 5: Six distinct facial emotions of children with ASD (images taken from an openly available autism children emotions dataset on Kaggle).
Figure 6: Performance of the trained model over 50 epochs.
Figure 7: Confusion matrix showing the correct and wrong predictions for each class.
Figure 8: Logged data record and respective fields in two modes: (a) CCM; (b) SCM.
Figure 9: Child interaction with the BT in (a) CCM and (b) SCM (the child's face has been masked due to ethical concerns).
Figure 10: IoT-LSAS CCM report.
Figure 11: IoT-LSAS SCM report.
21 pages, 3490 KiB  
Review
Mapping the Landscape of Biomechanics Research in Stroke Neurorehabilitation: A Bibliometric Perspective
by Anna Tsiakiri, Spyridon Plakias, Georgia Karakitsiou, Alexandrina Nikova, Foteini Christidi, Christos Kokkotis, Georgios Giarmatzis, Georgia Tsakni, Ioanna-Giannoula Katsouri, Sarris Dimitrios, Konstantinos Vadikolias, Nikolaos Aggelousis and Pinelopi Vlotinou
Biomechanics 2024, 4(4), 664-684; https://doi.org/10.3390/biomechanics4040048 - 8 Nov 2024
Viewed by 799
Abstract
Background/Objectives: The incorporation of biomechanics into stroke neurorehabilitation may strengthen the effectiveness of rehabilitation strategies by increasing our understanding of human movement and recovery processes. The present bibliometric analysis of biomechanics research in stroke neurorehabilitation was conducted to identify influential studies, key trends, and emerging research areas that can inform future research and clinical practice. Methods: A comprehensive bibliometric analysis was performed using documents retrieved from the Scopus database on 6 August 2024. The analysis included performance metrics such as publication counts and citation analysis, as well as science mapping techniques, including co-authorship, bibliographic coupling, co-citation, and keyword co-occurrence analyses. Data visualization tools such as VOSviewer and Power BI were utilized to map the bibliometric networks and trends. Results: Recent work has yielded substantial advancements in the application of brain–computer interfaces based on electroencephalography and functional neuroimaging during stroke neurorehabilitation; these interfaces translate neural activity into control signals for external devices and provide critical insights into the biomechanics of motor recovery by enabling precise tracking and feedback of movement during rehabilitation. Analysis of the most impactful contributors and influential publications identified two leading contributing countries: the United States and China. Three prominent research topic clusters were also noted: biomechanical evaluation and movement analysis, neurorehabilitation and robotics, and motor recovery and functional rehabilitation. Conclusions: The findings underscore the growing integration of advanced technologies such as robotics, neuroimaging, and virtual reality into neurorehabilitation practices.
These innovations are poised to enhance the precision and effectiveness of therapeutic interventions. Future research should focus on the long-term impacts of these technologies and the development of accessible, cost-effective tools for clinical use. The integration of multidisciplinary approaches will be crucial in optimizing patient outcomes and improving the quality of life for stroke survivors. Full article
(This article belongs to the Section Injury Biomechanics and Rehabilitation)
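The keyword co-occurrence analysis named in the Methods can be sketched as a pair-counting step over per-publication keyword lists: each publication contributes one count to every unordered pair of its keywords, and the pair weights drive link thickness in maps like those VOSviewer draws. This is a minimal stand-in, not the tool's implementation, and the publications and keywords below are hypothetical.

```python
# Sketch: keyword co-occurrence counting for a bibliometric network.
from collections import Counter
from itertools import combinations

def cooccurrence(publications):
    """Count unordered keyword pairs across a list of keyword lists."""
    pairs = Counter()
    for keywords in publications:
        # Deduplicate and sort so each pair has one canonical orientation.
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

pubs = [
    ["stroke", "rehabilitation", "robotics"],
    ["stroke", "rehabilitation", "eeg"],
    ["stroke", "robotics"],
]
net = cooccurrence(pubs)
print(net[("rehabilitation", "stroke")])  # 2
print(net[("robotics", "stroke")])        # 2
```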
Figure 1: Visualization of the bibliometric data extraction process from the Scopus database.
Figure 2: Annual number of publications on stroke neurorehabilitation biomechanics.
Figure 3: Co-authorship network map based on countries as the unit of analysis. Each node represents a country, with node size reflecting the number of publications and line thickness indicating the strength of collaboration between countries. Color codes represent clusters of countries that frequently collaborate on biomechanics and stroke neurorehabilitation research; countries within the same color group have stronger co-authorship links with one another.
Figure 4: Bibliographic coupling analysis using sources as the unit of analysis. Node size indicates the influence of each source, based on the number of shared references. Color codes represent clusters of sources that share similar citation patterns; sources within the same color group have a higher degree of shared references, suggesting thematic or disciplinary similarity.
Figure 5: Co-citation analysis focused on authors within biomechanics and stroke neurorehabilitation. The size of each node represents the frequency of an author's co-citation with others. Color codes represent clusters of authors whose works are frequently cited together, indicating that their research is related or falls within a similar subfield.
Figure 6: Co-occurrence analysis of author keywords, grouped into three distinct clusters.
Figure 7: Co-occurrence network of author keywords in biomechanics and stroke rehabilitation research. Each node represents a keyword, with node size indicating the frequency of its occurrence and link thickness reflecting the strength of co-occurrence between keywords. Color codes group keywords into clusters that frequently co-occur in the same publications, representing distinct thematic areas within the research landscape.
18 pages, 9647 KiB  
Article
Privacy-Preserving Live Video Analytics for Drones via Edge Computing
by Piyush Nagasubramaniam, Chen Wu, Yuanyi Sun, Neeraj Karamchandani, Sencun Zhu and Yongzhong He
Appl. Sci. 2024, 14(22), 10254; https://doi.org/10.3390/app142210254 - 7 Nov 2024
Viewed by 553
Abstract
The use of lightweight drones has surged in recent years across both personal and commercial applications, necessitating the ability to conduct live video analytics on drones with limited computational resources. While edge computing offers a solution to the throughput bottleneck, it also opens the door to potential privacy invasions by exposing sensitive visual data to risks. In this work, we present a lightweight, privacy-preserving framework designed for real-time video analytics. By integrating a novel split-model architecture tailored for distributed deep learning through edge computing, our approach strikes a balance between operational efficiency and privacy. We provide comprehensive evaluations on privacy, object detection, latency, bandwidth usage, and object-tracking performance for our proposed privacy-preserving model. Full article
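The drone-side privacy step this abstract describes, injecting noise into feature maps and transmitting only a subset, can be sketched in plain Python. This is an illustration under assumptions, not the paper's implementation: the feature maps and keep fraction are invented, and the noise parameters (μ = 0.1, σ² = 0.4, hence σ ≈ 0.632) are borrowed from the figure captions below.

```python
# Sketch: noise injection plus feature-map selection before transmission,
# a stand-in for the paper's Noise and Selector modules.
import random

def protect_features(feature_maps, mu=0.1, sigma=0.632, keep_fraction=0.5,
                     seed=0):
    """Add Gaussian noise N(mu, sigma^2) to every value, keep a subset."""
    rng = random.Random(seed)
    noisy = [[v + rng.gauss(mu, sigma) for v in fmap] for fmap in feature_maps]
    keep = max(1, int(len(noisy) * keep_fraction))
    indices = sorted(rng.sample(range(len(noisy)), keep))
    # The drone would send the kept maps plus their indices as metadata.
    return [noisy[i] for i in indices], indices

maps = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0]]
sent, idx = protect_features(maps)
print(len(sent), idx)
```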
Figure 1: Simplified diagram of core system logistics, showing how the drone receives an input image frame through its camera. The onboard model extracts the feature maps and sends them in a non-invertible form to the cloudlet. The cloudlet sends the predictions (i.e., the location of the bounding box for the human object) to the drone without needing to access the original video frames.
Figure 2: The onboard Model Backbone module extracts feature maps. The Noise Module injects noise from N(μ, σ²), and the Selector Module chooses a subset of the feature maps. The drone sends the processed feature maps and additional metadata to the cloudlet for further processing.
Figure 3: Simplified attack diagram exploiting unprotected features.
Figure 4: Comparison without our proposed scheme. The first column shows original images from the PennFudan dataset; the second column shows images reconstructed using unprotected feature maps (without our scheme); the third column shows images reconstructed with an annulment factor of λ = 0.3, noise variance of σ² = 0.4, and mean of μ = 0.1. The annulment value used is based on a Gaussian distribution (i.e., k ~ N(0, 1)).
Figure 5: Training methodology for the adversarial decoder.
Figure 6: Reconstruction quality (SSIM, MS-SSIM) vs. annulment factor (λ) for a victim MobileNetV3 backbone model on the CelebA dataset.
Figure 7: Reconstruction quality (SSIM, MS-SSIM) vs. noise variance (σ²) for a victim MobileNetV3 backbone model on the CelebA dataset.
Figure 8: Comparison of reconstruction quality with and without our proposed scheme. The first column shows original images from the CelebA dataset; the second column shows images reconstructed using the unprotected feature maps without our scheme; the third column shows images reconstructed using an annulment factor of λ = 0.3, noise variance of σ² = 0.4, and mean of μ = 0.1, with the annulment value drawn from a Gaussian distribution (i.e., k ~ N(0, 1)).
Figure 9: Comparison of privacy protection offered by different annulment values (k) for the variance threshold σ² = 0.4, annulment factor λ = 0.3, and noise mean μ = 0.1 on the CelebA dataset.
Figure 10: Comparison between SSD-MobileNetV3 (first column) and Secure SSDLite (ours, second column) bounding-box predictions for samples from the PennFudan dataset. Yellow bounding boxes are the ground truth; red boxes are the predictions. The third column blurs the sensitive regions based on the predictions from the Secure SSDLite model with α = 0.11. Subfigures 1(a)–3(c) are examples of predicted bounding boxes with varying subjects and sizes.
Figure 11: Performance of Secure SSDLite (AP@0.5, AP, AR) vs. annulment factor λ for variance thresholds (a) σ² = 0.1, (b) σ² = 0.2, and (c) σ² = 0.4. Here, the variance threshold is the variance, σ², and μ the mean, of the Gaussian distribution from which the additive noise is sampled.
Figure 12: Performance of Secure SSDLite (AP, AR, AP@0.5) vs. annulment value (k) for variance threshold σ² = 0.4.
14 pages, 4965 KiB  
Article
Effect of Layer Thickness on the Practical Adhesion of Borided Monel 400 Alloy
by Francisco Javier Alfonso-Reyes, José Martínez-Trinidad, Luis Alfonso Moreno-Pacheco, Osvaldo Quintana-Hernández, Wilbert Wong-Ángel and Ricardo Andrés García-León
Coatings 2024, 14(11), 1414; https://doi.org/10.3390/coatings14111414 - 7 Nov 2024
Viewed by 434
Abstract
This study presents new results on the practical adhesion behavior of boride layers formed on Monel 400 alloy by powder-pack boriding (PPBP) at 1223 K for exposure times of 2, 4, and 6 h, which produced layer thicknesses from approximately 7.9 to 23.8 µm. The nickel boride layers were characterized using optical microscopy, Berkovich nanoindentation, X-ray diffraction (XRD), and scanning electron microscopy (SEM) to determine the microstructure, hardness distribution, and failure mechanisms over the worn tracks. Scratch tests were conducted on the borided Monel 400 alloy according to the ASTM C-1624 standard, applying a progressively increasing normal load from 1 to 85 N with a Rockwell-C diamond indenter; the coefficient of friction and residual stress were monitored in real time. Critical loads were determined from the correlation between the normal force and visual inspection of the worn surface, identifying cracks (cohesive failure) or detachment (adhesive failure). The results showed that critical loads (LC1, LC2, and LC3) increased with layer thickness, reaching up to 49.0 N for the 6 h borided samples; cohesive failures appeared as Hertzian cracks, while adhesive failures manifested as chipping and delamination. The boride layer hardness was approximately 12 ± 0.3 GPa, about 4.0 times greater than that of the substrate, and Young's modulus reached 268 ± 15 GPa. These findings underscore that PPBP significantly enhances surface mechanical properties, demonstrating its potential for applications demanding high wear resistance and strong layer adhesion. Full article
(This article belongs to the Special Issue Enhanced Mechanical Properties of Metals by Surface Treatments)
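As an illustration of how a critical load might be picked out of a progressive scratch test such as the ASTM C-1624 procedure described above, the sketch below flags the first normal load at which the coefficient of friction jumps sharply. The COF trace and the jump threshold are hypothetical; only the 1–85 N ramp and the 49.0 N value echo the abstract, and the real procedure also relies on visual inspection of the worn track.

```python
# Sketch: detecting a critical load from a progressive scratch test by
# looking for an abrupt rise in the coefficient of friction (COF).

def critical_load(loads, cof, jump=0.15):
    """Return the first load (N) where COF rises by more than `jump`
    over the previous point, signalling a failure event; None if no event."""
    for i in range(1, len(loads)):
        if cof[i] - cof[i - 1] > jump:
            return loads[i]
    return None

loads = [1, 10, 20, 30, 40, 49, 60, 85]                  # N, progressive ramp
cof = [0.10, 0.12, 0.14, 0.15, 0.17, 0.40, 0.45, 0.50]   # hypothetical trace
print(critical_load(loads, cof))  # 49
```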
Figure 1: Schematic representation of the scratch test.
Figure 2: Cross-sectional views of the borided Monel 400 obtained at 1223 K with exposure times of (a) 2 h, (b) 4 h, and (c) 6 h.
Figure 3: XRD patterns obtained at the surface of borided Monel 400 using the scan-normal procedure; boriding at 1223 K for (a) 2 h, (b) 4 h, and (c) 6 h.
Figure 4: Profiles obtained on the cross-section of the nickel boride layer–substrate system: (a) hardness (H) and (b) Young's modulus (E).
Figure 5: Load–unload curves obtained by nanoindentation for the 6 h treatment at 1223 K.
Figure 6: Residual stress distribution across the length of the nickel boride layer–substrate system.
Figure 7: Scratch track length of the Monel 400 borided for 2 h: (a) COF behavior; (b) normal force (NF) and residual depth (Rd).
Figure 8: Scratch track length of the Monel 400 borided for 4 h: (a) COF behavior; (b) normal force (NF) and residual depth (Rd).
Figure 9: Scratch track length of the Monel 400 borided for 6 h: (a) COF behavior; (b) normal force (NF) and residual depth (Rd).
Figure 10: Failure mechanisms over the worn surface for the nickel boride layer–substrate system obtained at 1223 K for 2 h.
Figure 11: Failure mechanisms over the worn surface for the nickel boride layer–substrate system obtained at 1223 K for 4 h.
Figure 12: Failure mechanisms over the worn surface for the nickel boride layer–substrate system obtained at 1223 K for 6 h.
20 pages, 5993 KiB  
Article
Quantification of Visual Attention by Using Eye-Tracking Technology for Soundscape Assessment Through Physiological Response
by Hyun In Jo and Jin Yong Jeon
Int. J. Environ. Res. Public Health 2024, 21(11), 1478; https://doi.org/10.3390/ijerph21111478 - 7 Nov 2024
Viewed by 414
Abstract
Because soundscapes affect human health and comfort, methodologies for evaluating them through physiological responses have attracted considerable attention. In this study, we proposed a novel method for evaluating visual attention by using eye-tracking technology to objectively assess soundscape perception. The study incorporated questionnaire surveys and physiological measurements focusing on visual attention responses. Results from the questionnaire indicated that perceptions of vehicles and the sky were 6% and 26% more sensitive, respectively, whereas perceptions of vegetation, based on physiological responses, were approximately 3% to 50% more sensitive. The soundscape quality prediction model indicates that the proposed methodology can complement conventional questionnaire-based models and provide a nuanced interpretation of eventfulness relationships. Additionally, the visual attention quantification technique enhanced the restoration responses of questionnaire-based methods by 1–2%. The results of this study are significant because the study introduces a novel methodology for quantifying visual attention, which can be used as a supplementary tool for physiological responses in soundscape research. The proposed method can uncover novel mechanisms of human perception of soundscapes that may not be captured by questionnaires, providing insights for future research in soundscape evaluation through physiological measurements. Full article
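The eye-tracking metrics this study reports (number of fixations, fixation duration, and their per-object percentages) can be derived from a stream of fixation events with a few lines of bookkeeping. The sketch below is illustrative only; the events, object labels, and durations are hypothetical, not data from the study.

```python
# Sketch: per-object visual-attention metrics from fixation events.
from collections import defaultdict

def fixation_metrics(fixations):
    """fixations: list of (object_label, duration_ms) fixation events."""
    count = defaultdict(int)
    duration = defaultdict(float)
    for label, ms in fixations:
        count[label] += 1
        duration[label] += ms
    total_n = sum(count.values())
    total_ms = sum(duration.values())
    return {
        label: {
            "fixations": count[label],
            "duration_ms": duration[label],
            "pct_count": 100 * count[label] / total_n,
            "pct_time": 100 * duration[label] / total_ms,
        }
        for label in count
    }

events = [("vegetation", 300), ("sky", 150), ("vegetation", 250),
          ("vehicle", 100)]
m = fixation_metrics(events)
print(m["vegetation"]["pct_count"])  # 50.0
print(m["vegetation"]["pct_time"])   # 68.75
```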
Figure 1: Nine evaluation sites: (a) sky view; (b) stitched monoscopic 360° original view; (c) color layers of static factors [45]; (d) moving-object detection; A–C urban areas, D–F waterfront areas, and G–I green areas.
Figure 2: Study procedure and eye-tracking technique concepts for the detection of static and dynamic objects [9].
Figure 3: Visual attention responses obtained using eye-tracking technology: (a) number of fixations; (b) duration of fixations; (c) percentage of fixation number; (d) percentage of fixation time; (e) percentages of visual dominance based on questionnaire responses.
Figure 4: Soundscape perception results: (a) sound source identification; (b) perceived affective quality; (c) overall soundscape quality.
Figure 5: Psychological restoration responses for various soundscape experiences.
21 pages, 4004 KiB  
Article
Online Reviews Meet Visual Attention: A Study on Consumer Patterns in Advertising, Analyzing Customer Satisfaction, Visual Engagement, and Purchase Intention
by Aura Lydia Riswanto, Sujin Ha, Sangho Lee and Mahnwoo Kwon
J. Theor. Appl. Electron. Commer. Res. 2024, 19(4), 3102-3122; https://doi.org/10.3390/jtaer19040150 - 6 Nov 2024
Viewed by 890
Abstract
This study aims to bridge the gap between traditional consumer behavior analysis and modern techniques by integrating big data analysis, eye-tracking technology, and survey methods, recognizing that understanding consumer behavior is crucial for creating effective advertisements in the digital age. Initially, a big data analysis was performed to identify significant clusters of consumer sentiment from online reviews generated during a recent seasonal promotional campaign. The key factors were identified and grouped into the “Product”, “Model”, “Promo”, and “Effect” categories. Using these clusters as a foundation, an eye-tracking analysis measured visual attention metrics such as fixation duration and count to understand how the participants engaged with different advertisement content. Subsequently, a survey assessed the same participants’ purchase intentions and preferences related to the identified clusters. The results showed that the sentiment clusters related to products, promotions, and effects positively impacted customer satisfaction. The eye-tracking data revealed that advertisements featuring products and models garnered the most visual attention, while the survey results indicated that promotional content significantly influenced purchase intentions. This multi-step approach delivers an in-depth understanding of the factors that affect customer satisfaction and decision-making, providing valuable information for optimizing marketing strategies in the Korean skincare market. The findings emphasize the importance of integrating consumer sentiment analysis with visual engagement metrics to develop more effective and compelling marketing campaigns. Full article
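The first step of the pipeline this abstract describes, aggregating review sentiment by cluster, can be sketched as a simple grouped average. The cluster names come from the abstract; the reviews and sentiment scores below are invented for illustration, and the study's actual sentiment scoring is not specified here.

```python
# Sketch: averaging review sentiment scores within each consumer cluster.

def cluster_sentiment(reviews):
    """reviews: list of (cluster_label, sentiment_score in [-1, 1])."""
    sums, counts = {}, {}
    for cluster, score in reviews:
        sums[cluster] = sums.get(cluster, 0.0) + score
        counts[cluster] = counts.get(cluster, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

# Hypothetical scored reviews assigned to the abstract's four clusters.
reviews = [("Product", 0.8), ("Promo", 0.6), ("Product", 0.6),
           ("Effect", 0.7), ("Model", 0.2), ("Promo", 0.8)]
avg = cluster_sentiment(reviews)
print(avg)
```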
Figure 1: Research processes.
Figure 2: Top 5 skincare products from the Olive Young Summer Sale 2024. Source: www.oliveyoung.co.kr, accessed on 7 June 2024.
Figure 3: Participant using the eye-tracking machine.
Figure 4: Network visualization.
Figure 5: Cluster analysis result.
Figure 6: Heat map analysis results.