Search Results (282)

Search Parameters:
Keywords = media metrics

13 pages, 1700 KiB  
Article
#WhatIEatinaDay: The Quality, Accuracy, and Engagement of Nutrition Content on TikTok
by Michelle Zeng, Jacqueline Grgurevic, Rayan Diyab and Rajshri Roy
Nutrients 2025, 17(5), 781; https://doi.org/10.3390/nu17050781 - 24 Feb 2025
Abstract
Background: Social media platforms such as TikTok are significant sources of nutrition information for adolescents and young adults, who are vulnerable to unregulated, algorithm-driven content. This often spreads nutrition misinformation, impacting adolescent and young adult health and dietary behaviors. Objectives: While previous research has explored misinformation on other platforms, TikTok remains underexamined, so this study aimed to evaluate the landscape of nutrition-related content on TikTok. Methods: This study evaluated TikTok nutrition-related content by (1) identifying common nutrition topics and content creator types; (2) assessing the quality and accuracy of content using evidence-based frameworks, and (3) analyzing engagement metrics such as likes, comments, and shares. Results: The most common creators were health and wellness influencers (32%) and fitness creators (18%). Recipes (31%) and weight loss (34%) dominated the list of topics. When evaluating TikTok posts for quality, 82% of applicable posts lacked transparent advertising, 77% failed to disclose conflicts of interest, 63% promoted stereotypical attitudes, 55% did not provide evidence-based information, 75% lacked balanced and accurate content, and 90% failed to point out the risks and benefits of the advice presented. A total of 36% of posts were considered completely accurate, while 24% were mostly inaccurate, and 18% were completely inaccurate. No statistically significant association was found between the level of accuracy or evidence and engagement metrics (p > 0.05). Conclusions: TikTok prioritizes engagement over accuracy, exposing adolescents to harmful nutrition misinformation. Stricter moderation and evidence-based nutrition content are essential to protect adolescent and young adult health. Future research should explore interventions to reduce the impact of misinformation on adolescent dietary behaviors and mental well-being. Full article
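To make the reported null result concrete, here is a minimal sketch (with invented numbers, not the study's data) of how an association between coded accuracy level and an engagement metric could be tested:

```python
# Hypothetical sketch: testing whether engagement differs across accuracy
# levels, in the spirit of the analysis described above. All numbers below
# are invented placeholders, not the study's dataset.
from scipy.stats import kruskal

# Likes per post, grouped by coded accuracy level (illustrative only).
likes_by_accuracy = {
    "completely_accurate":   [1200, 450, 3100, 220, 980],
    "mostly_inaccurate":     [2500, 610, 140, 4200, 330],
    "completely_inaccurate": [900, 5100, 75, 2600, 410],
}

stat, p = kruskal(*likes_by_accuracy.values())
print(f"H = {stat:.2f}, p = {p:.3f}")  # p > 0.05 would mirror the reported null result
```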
(This article belongs to the Special Issue The Impact of Social Media on Eating Behavior)
Figures:
Figure 1: PRISMA flowchart illustrating screening of sample TikTok posts.
Figure 2: Quality of nutrition-related TikTok posts as defined by the Social Media Evaluation Checklist [26].
Figure 3: Distribution of nutrition topics in nutrition-related TikTok posts by levels of (a) accuracy and (b) evidence.
Figure 4: Distribution of content creator types publishing nutrition-related TikTok posts by levels of (a) accuracy and (b) evidence.
26 pages, 18654 KiB  
Article
A Study of MANET Routing Protocols in Heterogeneous Networks: A Review and Performance Comparison
by Nurul I. Sarkar and Md Jahan Ali
Electronics 2025, 14(5), 872; https://doi.org/10.3390/electronics14050872 - 23 Feb 2025
Abstract
Mobile ad hoc networks (MANETs) are becoming a popular networking technology as they can easily be set up and provide communication support on the go. These networks can be used in application areas, such as battlefields and disaster relief operations, where infrastructure networks are not available. Like media access control protocols, MANET routing protocols can also play an important role in determining network capacity and system performance. Research on the impact of heterogeneous nodes on MANET performance is required for the proper deployment of such systems. While MANET routing protocols have been studied and reported extensively in the networking literature, the impact of heterogeneous nodes/devices on system performance has not been fully explored yet. The main objective of this paper is to review and compare the performance of four selected MANET routing protocols (AODV, OLSR, BATMAN and DYMO) in a heterogeneous MANET setting. We consider three different types of nodes in the MANET routing performance study, namely PDAs (fixed nodes with no mobility), laptops (low-mobility nodes) and mobile phones (high-mobility nodes). We measure QoS metrics such as end-to-end delay, throughput, and packet delivery ratio using the OMNeT++ network simulator. The findings reported in this paper provide some insights into MANET routing performance issues and challenges that can help network researchers and engineers to contribute further toward developing next-generation wireless networks capable of operating under heterogeneous networking constraints. Full article
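As a concrete illustration of the three QoS metrics named above, the following sketch computes them from a hypothetical per-packet trace; the log format and numbers are assumptions for demonstration, not OMNeT++ output:

```python
# Minimal sketch of end-to-end delay, throughput, and packet delivery ratio
# from a hypothetical per-packet log: send time, receive time (None if lost),
# and payload size. Illustrative values only.
packets = [
    {"sent": 0.00, "recv": 0.12, "bytes": 512},
    {"sent": 0.10, "recv": 0.31, "bytes": 512},
    {"sent": 0.20, "recv": None, "bytes": 512},  # lost packet
]

delivered = [p for p in packets if p["recv"] is not None]
pdr = len(delivered) / len(packets)
avg_delay = sum(p["recv"] - p["sent"] for p in delivered) / len(delivered)
duration = max(p["recv"] for p in delivered) - min(p["sent"] for p in packets)
throughput_bps = sum(p["bytes"] * 8 for p in delivered) / duration

print(f"PDR = {pdr:.2f}, delay = {avg_delay:.3f} s, throughput = {throughput_bps:.0f} bit/s")
```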
(This article belongs to the Special Issue Multimedia in Radio Communication and Teleinformatics)
Figures:
Figure 1: Classification of MANET routing protocols.
Figure 2: The network model comprises PDAs, laptops, and mobile phones.
Figure 3: End-to-end delays for the AODV routing protocol.
Figure 4: End-to-end delays for the BATMAN routing protocol.
Figure 5: End-to-end delays for the DYMO routing protocol.
Figure 6: End-to-end delays for the OLSR routing protocol.
Figure 7: AODV’s throughput for laptop nodes.
Figure 8: AODV’s throughput for mobile nodes.
Figure 9: AODV’s throughput for fixed nodes.
Figure 10: BATMAN’s throughput for laptop nodes (low mobility).
Figure 11: BATMAN’s throughput for fixed nodes.
Figure 12: BATMAN’s throughput for mobile nodes.
Figure 13: DYMO’s throughput for laptop nodes.
Figure 14: DYMO’s throughput for mobile nodes.
Figure 15: DYMO’s throughput for fixed nodes.
Figure 16: OLSR’s throughput for laptop nodes.
Figure 17: OLSR’s throughput for fixed nodes.
Figure 18: OLSR’s throughput for mobile nodes.
Figure 19: Packet delivery ratio for AODV.
Figure 20: Packet delivery ratio for BATMAN.
Figure 21: Packet delivery ratio for DYMO.
Figure 22: Packet delivery ratio for OLSR.
33 pages, 2092 KiB  
Article
SentimentFormer: A Transformer-Based Multimodal Fusion Framework for Enhanced Sentiment Analysis of Memes in Under-Resourced Bangla Language
by Fatema Tuj Johora Faria, Laith H. Baniata, Mohammad H. Baniata, Mohannad A. Khair, Ahmed Ibrahim Bani Ata, Chayut Bunterngchit and Sangwoo Kang
Electronics 2025, 14(4), 799; https://doi.org/10.3390/electronics14040799 - 18 Feb 2025
Abstract
Social media has increasingly relied on memes as a tool for expressing opinions, making meme sentiment analysis an emerging area of interest for researchers. While much of the research has focused on English-language memes, under-resourced languages, such as Bengali, have received limited attention. Given the surge in social media use, the need for sentiment analysis of memes in these languages has become critical. One of the primary challenges in this field is the lack of benchmark datasets, particularly in languages with fewer resources. To address this, we used the MemoSen dataset, designed for Bengali, which consists of 4368 memes annotated with three sentiment labels: positive, negative, and neutral. MemoSen is divided into training (70%), test (20%), and validation (10%) sets, with an imbalanced class distribution: 1349 memes in the positive class, 2728 in the negative class, and 291 in the neutral class. Our approach leverages advanced deep learning techniques for multimodal sentiment analysis in Bengali, introducing three hybrid approaches. SentimentTextFormer is a text-based, fine-tuned model that utilizes state-of-the-art transformer architectures to accurately extract sentiment-related insights from Bengali text, capturing nuanced linguistic features. SentimentImageFormer is an image-based model that employs cutting-edge transformer-based techniques for precise sentiment classification through visual data. Lastly, SentimentFormer is a hybrid model that seamlessly integrates both text and image modalities using fusion strategies. Early fusion combines textual and visual features at the input level, enabling the model to jointly learn from both modalities. Late fusion merges the outputs of separate text and image models, preserving their individual strengths for the final prediction. Intermediate fusion integrates textual and visual features at intermediate layers, refining their interactions during processing. These fusion strategies combine the strengths of both textual and visual data, enhancing sentiment analysis by exploiting complementary information from multiple sources. The performance of our models was evaluated using various accuracy metrics, with SentimentTextFormer achieving 73.31% accuracy and SentimentImageFormer attaining 64.72%. The hybrid model, SentimentFormer (SwiftFormer with mBERT), employing intermediate fusion, shows a notable improvement in accuracy, achieving 79.04%, outperforming SentimentTextFormer by 5.73% and SentimentImageFormer by 14.32%. Among the fusion strategies, SentimentFormer (SwiftFormer with mBERT) achieved the highest accuracy of 79.04%, highlighting the effectiveness of our fusion technique and the reliability of our multimodal framework in improving sentiment analysis accuracy across diverse modalities. Full article
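The three fusion strategies described above can be sketched generically; the module below is an illustration with assumed feature dimensions, not the paper's SwiftFormer/mBERT implementation, showing where each strategy merges the modalities:

```python
# Sketch of early, intermediate, and late fusion over precomputed text and
# image features. Dimensions and module names are assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=768, img_dim=512, hidden=256, classes=3, mode="intermediate"):
        super().__init__()
        self.mode = mode
        self.text_proj = nn.Linear(text_dim, hidden)
        self.img_proj = nn.Linear(img_dim, hidden)
        self.early_head = nn.Linear(text_dim + img_dim, classes)  # early fusion
        self.mid_head = nn.Linear(2 * hidden, classes)            # intermediate fusion
        self.text_head = nn.Linear(hidden, classes)               # for late fusion
        self.img_head = nn.Linear(hidden, classes)

    def forward(self, text_feat, img_feat):
        if self.mode == "early":  # concatenate raw features at the input level
            return self.early_head(torch.cat([text_feat, img_feat], dim=-1))
        t = torch.relu(self.text_proj(text_feat))
        i = torch.relu(self.img_proj(img_feat))
        if self.mode == "intermediate":  # merge at a hidden layer
            return self.mid_head(torch.cat([t, i], dim=-1))
        return (self.text_head(t) + self.img_head(i)) / 2  # late fusion: average logits

logits = FusionClassifier(mode="intermediate")(torch.randn(4, 768), torch.randn(4, 512))
```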
Figures:
Figure 1: Distribution of samples across train, test, and validation sets in the MemoSen dataset.
Figure 2: Representative examples from the MemoSen dataset, illustrating memes labeled with positive, neutral, and negative sentiments.
Figure 3: Unimodal sentiment classification framework for Bangla meme captions.
Figure 4: Unimodal sentiment classification framework for meme images.
Figure 5: Fusion framework for enhanced multimodal sentiment analysis of Bangla memes.
Figure 6: Confusion matrices of SentimentTextFormer, SentimentImageFormer, and SentimentFormer showcasing their sentiment classification performance on the MemoSen dataset.
Figure 7: Error analysis of multimodal sentiment classification in Bengali memes.
20 pages, 529 KiB  
Article
Directed Consumer-Generated Content (DCGC) for Social Media Marketing: Analyzing Performance Metrics from a Field Experiment in the Publishing Industry
by Eleni Ntousi, Chris Lazaris, Pavlina Katiaj and Anastasios Koukopoulos
Systems 2025, 13(2), 124; https://doi.org/10.3390/systems13020124 - 17 Feb 2025
Abstract
This study examines the efficacy of a novel form of consumer-generated content (CGC) digital advertising, termed “directed” consumer-generated content (DCGC), in comparison to traditional brand-created social media advertisements. The analysis focuses on performance metrics and return on ad spend (ROAS). Data were gathered from social media campaigns incorporating both DCGC and non-CGC through a field experiment, followed by a rigorous statistical analysis to identify the most effective advertising strategies. Findings indicate that DCGC typically results in significantly higher conversion rates, increased conversions, and superior ROAS. Overall, DCGC advertisements demonstrate enhanced performance relative to non-CGC campaigns, suggesting that they represent a more strategic allocation of a brand’s marketing resources, particularly when the primary objective is to drive sales and achieve elevated conversion rates. This research contributes to the academic discourse and practical implementation of social media advertising by highlighting the advantages of DCGC as a cost-effective and efficient advertising approach for brands. Full article
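For readers unfamiliar with the headline metrics, a small worked example with invented figures: conversion rate is conversions over clicks, and ROAS is attributed revenue over ad spend:

```python
# Worked example of the two headline metrics. All figures are invented.
def conversion_rate(conversions, clicks):
    return conversions / clicks

def roas(revenue, ad_spend):
    return revenue / ad_spend

dcgc = {"clicks": 1_000, "conversions": 45, "revenue": 900.0, "spend": 150.0}
print(conversion_rate(dcgc["conversions"], dcgc["clicks"]))  # 0.045
print(roas(dcgc["revenue"], dcgc["spend"]))                  # 6.0
```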
(This article belongs to the Special Issue Complex Systems for E-commerce and Business Management)
Figures:
Figure 1: Hypotheses and metrics graphical summary.
21 pages, 2675 KiB  
Article
Cyberbullying Detection, Prevention, and Analysis on Social Media via Trustable LSTM-Autoencoder Networks over Synthetic Data: The TLA-NET Approach
by Alfredo Cuzzocrea, Mst Shapna Akter, Hossain Shahriar and Pablo García Bringas
Future Internet 2025, 17(2), 84; https://doi.org/10.3390/fi17020084 - 12 Feb 2025
Abstract
The plague of cyberbullying on social media exerts a dangerous influence on human lives. Because online social networks continue to expand daily, the proliferation of hate speech is also growing. Consequently, distressing content is often implicated in the onset of depression and suicide-related behaviors. In this paper, we propose an innovative framework, named the trustable LSTM-autoencoder network (TLA-NET), which is designed for the detection of cyberbullying on social media by employing synthetic data. We introduce a state-of-the-art method for the automatic production of translated data, which are aimed at tackling data availability issues. Several languages, including Hindi and Bangla, continue to face research limitations due to the absence of adequate datasets. By employing TLA-NET and traditional models, such as long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM), the LSTM-autoencoder, Word2vec, bidirectional encoder representations from transformers (BERT), and the Generative Pre-trained Transformer 2 (GPT-2), we perform the experimental identification of aggressive comments in datasets in Hindi, Bangla, and English. In addition, we employ evaluation metrics that include the F1-score, accuracy, precision, and recall to assess the performance of the models. Our model demonstrates outstanding performance across all the datasets by achieving a remarkable 99% accuracy and positioning itself as a frontrunner when compared to previous works that make use of the dataset featured in this research. Full article
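A minimal sketch of the evaluation step described above, computing F1-score, accuracy, precision, and recall with scikit-learn; the labels below are placeholders standing in for real model output:

```python
# Dummy binary labels (1 = aggressive, 0 = not aggressive) for illustration.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
```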
(This article belongs to the Section Cybersecurity)
Figures:
Figure 1: Example of text data in the target dataset.
Figure 2: Overview of the modified LSTM-autoencoder network.
Figure 3: Binary classification performance evaluation of different models with the proposed approach.
Figure 4: Binary classification error evaluation of different models with the proposed approach.
Figure 5: Confusion matrices for binary classification labels across different models.
Figure 6: ROC curves for the binary classification labels across different models.
Figure 7: Multi-class classification performance evaluation of different models with the proposed approach.
Figure 8: Multi-class classification error evaluation of different models with the proposed approach.
Figure 9: Confusion matrices for multi-class classification labels across different models.
Figure 10: ROC curves for multi-class classification labels across different models.
20 pages, 8021 KiB  
Article
CNN 1D: A Robust Model for Human Pose Estimation
by Mercedes Hernández de la Cruz, Uriel Solache, Antonio Luna-Álvarez, Sergio Ricardo Zagal-Barrera, Daniela Aurora Morales López and Dante Mujica-Vargas
Information 2025, 16(2), 129; https://doi.org/10.3390/info16020129 - 10 Feb 2025
Abstract
The purpose of this research is to develop an efficient model for human pose estimation (HPE). The main limitations of the study include the small size of the dataset and confounds in the classification of certain poses, suggesting the need for more data to improve the robustness of the model in uncontrolled environments. The methodology used combines MediaPipe for the detection of key points in images with a CNN1D model that processes preprocessed feature sequences. The Yoga Poses dataset was used for the training and validation of the model, and resampling techniques, such as bootstrapping, were applied to improve accuracy and avoid overfitting in the training. The results show that the proposed model achieves 96% overall accuracy in the classification of five yoga poses, with accuracy metrics above 90% for all classes. The implementation of the CNN1D model instead of traditional 2D or 3D architectures accomplishes the goal of maintaining a low computational cost and efficient preprocessing of the images, allowing for its use on mobile devices and in real-time environments. Full article
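A rough sketch of such a keypoint-based 1D CNN, assuming MediaPipe's 33 pose landmarks with (x, y, z) coordinates and five output classes; the layer sizes are illustrative, not the paper's exact architecture:

```python
# Illustrative Conv1D classifier over MediaPipe pose landmarks.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(33, 3)),            # 33 keypoints, (x, y, z) each
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dropout(0.2),
    layers.Dense(5, activation="softmax"),  # five yoga poses
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```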
(This article belongs to the Section Artificial Intelligence)
Figures:
Figure 1: Proposed methodology phases.
Figure 2: Various postures existing in the dataset representation.
Figure 3: The extraction process of features created by BlazePose. First, it recognizes the human silhouette, and subsequently, it detects the main joints of the human anatomy.
Figure 4: Real-time detection generated by CNN model.
Figure 5: K-fold training, precision, and loss function verification with a dropout rate of 0.2. (a) Average accuracy in training with a 0.2 dropout rate. (b) Average loss function in training with a 0.2 dropout rate.
Figure 6: K-fold training, precision, and loss function verification with a 0.5 dropout rate. (a) Average accuracy in training with a 0.5 dropout rate. (b) Average loss function in training with a 0.5 dropout rate.
Figure 7: Verification of the model CNN1D’s generalization of images not seen in training and validation. (a) Down-dog posture—image nonexistent in the trial and training set. (b) Landmarks generated by the CNN1D model for prediction and recognition.
Figure 8: Correct predictions of the poses to be estimated by the CNN model. (a) Correct warrior II detection. (b) Correct goddess detection.
22 pages, 3785 KiB  
Article
Visual Footprint of Separation Through Membrane Distillation on YouTube
by Ersin Aytaç and Mohamed Khayet
Data 2025, 10(2), 24; https://doi.org/10.3390/data10020024 - 8 Feb 2025
Abstract
Social media has revolutionized the dissemination of information, enabling the rapid and widespread sharing of news, concepts, technologies, and ideas. YouTube is one of the most important online video sharing platforms of our time. In this research, we investigate the trace of separation through membrane distillation (MD) on YouTube using statistical methods and natural language processing. The dataset collected on 04.01.2024 included 212 videos with key characteristics such as durations, views, subscribers, number of comments, likes, etc. The results show that the number of videos is not sufficient, but there is an increasing trend, especially since 2019. The high number of channels offering information about MD technology in countries such as the USA, India, and Canada indicates that these countries recognized the practical benefits of this technology, especially in areas such as water treatment, desalination, and industrial applications. This suggests that MD could play a pivotal role in finding solutions to global water challenges. Word cloud analysis showed that terms such as “water”, “treatment”, “desalination”, and “separation” were prominent, indicating that the videos focused mainly on the principles and applications of MD. The sentiment of the comments is mostly positive, and the dominant emotion is neutral, revealing that viewers generally have a positive attitude towards MD. The narrative intensity metric evaluates the information transfer efficiency of the videos and provides a guide for effective content creation strategies. The results of the analyses revealed that social media awareness about MD technology is still not sufficient and that content development and sharing strategies should focus on bringing the technology to a wider audience. Full article
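Since narrative intensity is defined in the text as words per second, it reduces to a one-line computation over a transcript and video duration; the snippet below uses illustrative values:

```python
# Narrative intensity (words/second) over a hypothetical transcript excerpt.
def narrative_intensity(transcript: str, duration_s: float) -> float:
    return len(transcript.split()) / duration_s

demo = "Membrane distillation is a thermally driven separation process"
print(f"{narrative_intensity(demo, 4.0):.2f} words/second")
```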
Figures:
Figure 1: Overall features of the collected MD video dataset.
Figure 2: Number of MD videos uploaded to YouTube annually.
Figure 3: Pair plot of number of views, likes, comments, subscribers of the MD videos.
Figure 4: Distributions of video language (reds), video type (greens), and comment language (blues) of the MD videos.
Figure 5: Top content creators who uploaded the most MD-related videos on YouTube.
Figure 6: Geographic distribution of content creators producing MD-related content on YouTube.
Figure 7: Word cloud of most frequently appearing words in MD video transcripts.
Figure 8: (a) Sentiment and (b) emotion analysis of the comments on MD videos uploaded on YouTube.
Figure 9: Narrative intensity (words/second) heatmap of MD-related YouTube videos in English.
16 pages, 5984 KiB  
Article
Automated Scattering Media Estimation in Peplography Using SVD and DCT
by Seungwoo Song, Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Electronics 2025, 14(3), 545; https://doi.org/10.3390/electronics14030545 - 29 Jan 2025
Abstract
In this paper, we propose the automation of scattering media information estimation in peplography using singular value decomposition (SVD) and the discrete cosine transform (DCT). Conventional scattering media-removal methods reduce light scattering in images utilizing a variety of image-processing techniques and machine learning algorithms. However, under conditions of heavy scattering media, they may not clearly visualize the object information. Peplography has been proposed as a solution to this problem. Peplography is capable of visualizing the object information by estimating the scattering media information and detecting the ballistic photons from heavy scattering media. Following that, 3D information can be obtained by integral imaging. However, it is difficult to apply this method to real-world situations since the process of scattering media estimation in peplography is not automated. To overcome this problem, we use automatic scattering media-estimation methods based on SVD and DCT. These can estimate the scattering media information automatically by truncating the singular value matrix and applying a Gaussian low-pass filter in the frequency domain. To evaluate our proposed method, we conduct the experiment under two different conditions and compare the resulting images with those of the conventional method using metrics such as structural similarity (SSIM), feature similarity (FSIMc), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Full article
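A compact sketch of the two estimation steps named above, with an assumed truncation rank and filter width; a random array stands in for a peplogram:

```python
# Sketch of (1) singular value truncation and (2) Gaussian low-pass filtering
# in the DCT domain. Rank k and sigma are assumed values for illustration.
import numpy as np
from scipy.fft import dctn, idctn

img = np.random.rand(128, 128)  # placeholder for a peplogram

# (1) SVD: keep only the largest singular values (low-rank background estimate).
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 5
svd_estimate = (U[:, :k] * s[:k]) @ Vt[:k, :]

# (2) DCT: apply a Gaussian low-pass mask in the frequency domain.
coeffs = dctn(img, norm="ortho")
u, v = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0]))
sigma = 10.0
mask = np.exp(-(u**2 + v**2) / (2 * sigma**2))
dct_estimate = idctn(coeffs * mask, norm="ortho")
```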
(This article belongs to the Special Issue Computational Imaging and Its Application)
Figures:
Figure 1: Flowchart of peplography.
Figure 2: Flowchart of photon-counting algorithm in peplography.
Figure 3: Concept of the camera array-based pickup method.
Figure 4: Concept of scattering media estimation in peplography, where ∗ represents a convolution operator.
Figure 5: Singular value decomposition (SVD).
Figure 6: Concept of the discrete cosine transform (DCT).
Figure 7: Flowchart of the proposed method.
Figure 8: Experiment setup of the scattering media environment.
Figure 9: Scattering media situations. (a) Reference image and (b) 2D single peplogram.
Figure 10: Reconstructed 3D images. (a) Reference image, (b) single peplogram, (c) reconstructed 3D image by the conventional peplography, and (d) reconstructed 3D image by our proposed method where the depth is 474 mm.
Figure 11: Reconstructed 3D images. (a) Reference image, (b) single peplogram, (c) reconstructed 3D image by the conventional peplography, and (d) reconstructed 3D image by our proposed method where the depth is 504 mm.
Figure 12: Results of each IQA method.
Figure 13: Experiment under changed conditions. (a) Reference image and (b) 2D single peplogram.
Figure 14: Reconstructed 3D images. (a) Reference image, (b) single peplogram, (c) reconstructed 3D image by the conventional peplography, and (d) reconstructed 3D image by our proposed method, where the depth is 616 mm.
Figure 15: Reconstructed 3D images. (a) Reference image, (b) single peplogram, (c) reconstructed 3D image by the conventional peplography, and (d) reconstructed 3D image by our proposed method, where the depth is 631 mm.
Figure 16: Reconstructed 3D images. (a) Reference image, (b) single peplogram, (c) reconstructed 3D image by the conventional peplography, and (d) reconstructed 3D image by our proposed method, where the depth is 676 mm.
Figure 17: Results of each IQA method.
19 pages, 633 KiB  
Article
Integrating Sentiment Analysis and Reinforcement Learning for Equitable Disaster Response: A Novel Approach
by Saad Alqithami
Sustainability 2025, 17(3), 1072; https://doi.org/10.3390/su17031072 - 28 Jan 2025
Abstract
Efficient disaster response requires dynamic and adaptive resource allocation strategies that account for evolving public needs, real-time sentiment, and sustainability concerns. In this study, a sentiment-driven framework is proposed, integrating reinforcement learning, natural language processing, and gamification to optimize the distribution of resources such as water, food, medical aid, shelter, and electricity during disaster scenarios. The model leverages real-time social media data to capture public sentiment, combines it with geospatial and temporal information, and then trains a reinforcement learning agent to maximize both community satisfaction and equitable resource allocation. The model achieved equity scores of up to 0.5 and improved satisfaction metrics by 30%, which outperforms static allocation baselines. By incorporating a gamified simulation platform, stakeholders can interactively refine policies and address the inherent uncertainties of disaster events. This approach highlights the transformative potential of using advanced artificial intelligence techniques to enhance adaptability, promote sustainability, and foster collaborative decision-making in humanitarian aid efforts. Full article
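The abstract does not spell out the equity score's formula; one plausible formulation (an assumption, not the paper's definition) is one minus the Gini coefficient of per-region allocations, so that 1.0 means a perfectly uniform distribution:

```python
# Hypothetical equity metric: 1 - Gini coefficient of regional allocations.
import numpy as np

def equity_score(allocations):
    x = np.sort(np.asarray(allocations, dtype=float))
    n = x.size
    gini = (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())
    return 1.0 - gini

print(equity_score([10, 10, 10, 10]))  # 1.0: perfectly even allocation
print(equity_score([40, 0, 0, 0]))     # low: resources concentrated in one region
```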
(This article belongs to the Section Hazards and Sustainability)
Figures:
Figure 1: Conceptual diagram of the proposed framework for disaster resource allocation.
Figure 2: Temporal distribution of disaster-related tweets. Peaks align with major events (e.g., hurricanes and floods).
Figure 3: Keyword distribution in raw dataset, indicating relative frequency of resource mentions.
Figure 4: Screenshot of the gamified simulation environment in Dash reflecting interactive resource allocation and visualization tools.
Figure 5: Temporal trends for resource demands: (a) electricity, (b) food, (c) shelter, (d) medical, and (e) water. Each subfigure reflects sentiment-driven insights from tweets over the time period under study.
Figure 6: Equity distribution: histogram of equity scores showing how resource allocation is spread across regions. Higher values indicate more uniform distribution, while lower values suggest resource concentration.
Figure 7: Trade-offs between equity and satisfaction in resource allocation. Clusters in the upper right region imply that resource distribution meets a large proportion of demands while preserving fairness.
19 pages, 529 KiB  
Review
Redefining Event Detection and Information Dissemination: Lessons from X (Twitter) Data Streams and Beyond
by Harshit Srivastava and Ravi Sankar
Computers 2025, 14(2), 42; https://doi.org/10.3390/computers14020042 - 28 Jan 2025
Abstract
X (formerly known as Twitter), Reddit, and other social media forums have dramatically changed the way society interacts with live events in this day and age. The huge amount of data generated by these platforms presents challenges, especially in terms of processing speed and the complexity of finding meaningful patterns and events. These data streams are generated in multiple formats, with constant updating, and are real-time in nature; thus, they require sophisticated algorithms capable of dynamic event detection in this dynamic environment. Event detection techniques have recently achieved substantial development, but most research carried out so far evaluates only single methods, not comparing the overall performance of these methods across multiple platforms and types of data. With that view, this paper presents a deep investigation of complex state-of-the-art event detection algorithms specifically customized for streams of data from X. We review various current techniques based on a thorough comparative performance test and point to problems inherently related to the detection of patterns in high-velocity streams with noise. We introduce some novelty to this research area, supported by appropriate robust experimental frameworks, to perform quantitative and qualitative comparisons. We provide insight into how those algorithms perform under varying conditions by defining a set of clear, measurable metrics. Our findings contribute new knowledge that will help inform future research into the improvement of event detection systems for dynamic data streams and enhance their capabilities for real-time and actionable insights. This paper will go a step further than the present knowledge of event detection and discuss how algorithms can be adapted and refined in view of the emerging demands imposed by data streams. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
Figures:
Figure 1: Analysis techniques and design flow.
23 pages, 20134 KiB  
Article
The Development and Validation of an Artificial Intelligence Model for Estimating Thumb Range of Motion Using Angle Sensors and Machine Learning: Targeting Radial Abduction, Palmar Abduction, and Pronation Angles
by Yutaka Ehara, Atsuyuki Inui, Yutaka Mifune, Kohei Yamaura, Tatsuo Kato, Takahiro Furukawa, Shuya Tanaka, Masaya Kusunose, Shunsaku Takigami, Shin Osawa, Daiji Nakabayashi, Shinya Hayashi, Tomoyuki Matsumoto, Takehiko Matsushita and Ryosuke Kuroda
Appl. Sci. 2025, 15(3), 1296; https://doi.org/10.3390/app15031296 - 27 Jan 2025
Abstract
An accurate assessment of thumb range of motion is crucial for diagnosing musculoskeletal conditions, evaluating functional impairments, and planning effective rehabilitation strategies. In this study, we aimed to enhance the accuracy of estimating thumb range of motion using a combination of MediaPipe, which is an AI-based posture estimation library, and machine learning methods, taking the values obtained using angle sensors to be the true values. Radial abduction, palmar abduction, and pronation angles were estimated using MediaPipe based on coordinates detected from videos of 18 healthy participants (nine males and nine females with an age range of 30–49 years) selected to reflect a balanced distribution of height and other physical characteristics. A conical thumb movement model was constructed, and parameters were generated based on the coordinate data. Five machine learning models were evaluated, with LightGBM achieving the highest accuracy across all metrics. Specifically, for radial abduction, palmar abduction, and pronation, the root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and correlation coefficient were 4.67°, 3.41°, 0.94, and 0.97; 4.63°, 3.41°, 0.95, and 0.98; and 5.69°, 4.17°, 0.88, and 0.94, respectively. These results demonstrate that when estimating thumb range of motion, the AI model trained using angle sensor data and LightGBM achieved accuracy that was high and comparable to that of prior methods involving the use of MediaPipe and a protractor. Full article
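A minimal sketch of the winning setup, training a LightGBM regressor on synthetic stand-in features and scoring it with the same RMSE, MAE, and R2 metrics; the data below are not the study's sensor-labeled landmarks:

```python
# LightGBM regression scored with RMSE, MAE, and R^2 on synthetic data.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 8)                    # stand-in for cone-model parameters
y = 90 * X[:, 0] + 5 * np.random.randn(500)   # synthetic angle in degrees

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMRegressor(n_estimators=200).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE={rmse:.2f}°, MAE={mean_absolute_error(y_te, pred):.2f}°, "
      f"R2={r2_score(y_te, pred):.2f}")
```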
(This article belongs to the Special Issue Research on Machine Learning in Computer Vision)
Figures:
Figure 1: A comprehensive set of hand anthropometric measurements, including hand, finger, and phalangeal lengths, was recorded based on the fully extended and abducted right hand of each participant.
Figure 2: Validation of the angle sensor used for reproducing thumb movements.
Figure 3: The angle sensor was fixed to the dorsal midpoint of the metacarpal bone of the right thumb along its bone axis using tape. The sensor measured radial abduction, palmar abduction, and pronation angles.
Figure 4: The marker trajectories of the carpometacarpal (CMC), metacarpophalangeal (MP), and interphalangeal (IP) joints were assumed to align along the same straight line and concentric sphere. The apex of the cone was assumed to be the thumb CMC joint, the second metacarpal was the base axis, the index MP joint was the center, and the thumb IP joint formed the base of the cone, with the thumb performing a circular motion. The rotation angle, θ, was defined as 0° on the plane of the palm. The right thumb was then moved in a range of θ = 0° to 90° in its maximum abduction position.
Figure 5: The tablet was positioned 50 cm above the table, 100 cm from the subject, and at an angle of 45° with respect to the palm for video recording.
Figure 6: MediaPipe landmarks and examples of hand coordinates detected using MediaPipe: detection of the bounding box for relatively rigid parts of the hand and simulation of the hand skeleton. (0) WRIST. (1) THUMB_CMC. (2) THUMB_MCP. (3) THUMB_IP. (4) THUMB_TIP. (5) INDEX_FINGER_MCP. (6) INDEX_FINGER_PIP. (7) INDEX_FINGER_DIP. (8) INDEX_FINGER_TIP. (9) MIDDLE_FINGER_MCP. (10) MIDDLE_FINGER_PIP. (11) MIDDLE_FINGER_DIP. (12) MIDDLE_FINGER_TIP. (13) RING_FINGER_MCP. (14) RING_FINGER_PIP. (15) RING_FINGER_DIP. (16) RING_FINGER_TIP. (17) PINKY_MCP. (18) PINKY_PIP. (19) PINKY_DIP. (20) PINKY_TIP.
Figure 7: Workflow for data acquisition and the machine learning processes.
Figure 8: (a) The actual angles obtained from the test data for radial abduction compared with their predicted angles obtained from the training data using the linear regression model. (b) The residuals (actual angles − predicted angles) of the linear regression model for radial abduction plotted and compared against the actual angles in the test data.
Figure 9: (a) The actual angles from the test data for radial abduction compared with their predicted angles obtained from the training data using the ElasticNet model. (b) The residuals (actual angles − predicted angles) of the ElasticNet model for radial abduction plotted and compared against the actual angles in the test data.
Figure 10: (a) The actual angles from the test data for radial abduction compared with their predicted angles obtained from the training data using the SVM model. (b) The residuals (actual angles − predicted angles) of the SVM model for radial abduction plotted and compared against the actual angles in the test data.
Figure 11: (a) The actual angles from the test data for radial abduction compared with their predicted angles obtained from the training data using the random forest regression model. (b) The residuals (actual angles − predicted angles) of the random forest regression model for radial abduction plotted and compared against the actual angles in the test data.
Figure 12: (a) The actual angles from the test data for radial abduction compared with their predicted angles obtained from the training data using the LightGBM model. (b) The residuals (actual angles − predicted angles) of the LightGBM model for radial abduction plotted and compared against the actual angles in the test data.
Figure 13: (a) The actual angles from the test data for palmar abduction compared with their predicted angles obtained from the training data using the linear regression model. (b) The residuals (actual angles − predicted angles) of the linear regression model for palmar abduction plotted and compared against the actual angles in the test data.
Figure 14: (a) The actual angles from the test data for palmar abduction compared with their predicted angles obtained from the training data using the ElasticNet model. (b) The residuals (actual angles − predicted angles) of the ElasticNet model for palmar abduction plotted and compared against the actual angles in the test data.
Figure 15: (a) The actual angles from the test data for palmar abduction compared with their predicted angles obtained from the training data using the SVM model. (b) The residuals (actual angles − predicted angles) of the SVM model for palmar abduction plotted and compared against the actual angles in the test data.
Figure 16: (a) The actual angles from the test data for palmar abduction compared with their predicted angles obtained from the training data using the random forest regression model. (b) The residuals (actual angles − predicted angles) of the random forest regression model for palmar abduction plotted and compared against the actual angles in the test data.
Figure 17: (a) The actual angles from the test data for palmar abduction compared with their predicted angles obtained from the training data using the LightGBM model. (b) The residuals (actual angles − predicted angles) of the LightGBM model for palmar abduction plotted and compared against the actual angles in the test data.
Figure 18: (a) The actual angles from the test data for pronation compared with their predicted angles obtained from the training data using the linear regression model. (b) The residuals (actual angles − predicted angles) of the linear regression model for pronation plotted and compared against the actual angles in the test data.
Figure 19: (a) The actual angles from the test data for pronation compared with their predicted angles obtained from the training data using the ElasticNet model. (b) The residuals (actual angles − predicted angles) of the ElasticNet model for pronation plotted and compared against the actual angles in the test data.
Figure 20: (a) The actual angles from the test data for pronation compared with their predicted angles obtained from the training data using the SVM model. (b) The residuals (actual angles − predicted angles) of the SVM model for pronation plotted and compared against the actual angles in the test data.
Figure 21: (a) The actual angles from the test data for pronation compared with their predicted angles obtained from the training data using the random forest regression model. (b) The residuals (actual angles − predicted angles) of the random forest regression model for pronation plotted and compared against the actual angles in the test data.
Figure 22: (a) The actual angles from the test data for pronation compared with their predicted angles obtained from the training data using the LightGBM model. (b) The residuals (actual angles − predicted angles) of the LightGBM model for pronation plotted and compared against the actual angles in the test data.
Figure 23: Feature importance and SHAP values for LightGBM for radial abduction.
Figure 24: Feature importance and SHAP values for LightGBM for palmar abduction.
Figure 25: Feature importance and SHAP values for LightGBM for pronation.
18 pages, 1202 KiB  
Article
Enhancing News Articles: Automatic SEO Linked Data Injection for Semantic Web Integration
by Hamza Salem, Hadi Salloum, Osama Orabi, Kamil Sabbagh and Manuel Mazzara
Appl. Sci. 2025, 15(3), 1262; https://doi.org/10.3390/app15031262 - 26 Jan 2025
Abstract
This paper presents a novel solution aimed at enhancing news web pages for seamless integration into the Semantic Web. By utilizing advanced pattern mining techniques alongside OpenAI’s GPT-3, we rewrite news articles to improve their readability and accessibility for Google News aggregators. Our approach is characterized by its methodological rigour and is evaluated through quantitative metrics, validated using Google’s Rich Results Test API to confirm adherence to Google’s structured data guidelines. In this process, a “Pass” in the Rich Results Test is taken as an indication of eligibility for rich results, demonstrating the effectiveness of our generated structured data. The impact of our work is threefold: it advances the technological integration of a substantial segment of the web into the Semantic Web, promotes the adoption of Semantic Web technologies within the news sector, and significantly enhances the discoverability of news articles in aggregator platforms. Furthermore, our solution facilitates the broader dissemination of news content to diverse audiences. This submission introduces an innovative solution substantiated by empirical evidence of its impact and methodological soundness, thereby making a significant contribution to the field of Semantic Web research, particularly in the context of news and media articles. Full article
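A minimal sketch of the injection step, building a schema.org NewsArticle JSON-LD object and embedding it in a page's head; the field values, page HTML, and helper names are placeholders, and validation with Google's Rich Results Test happens separately, as described above:

```python
# Build and inject a schema.org NewsArticle JSON-LD block (placeholder values).
import json

html_page = "<html><head><title>Example</title></head><body>Article text...</body></html>"

def build_news_jsonld(headline, date_published, author_name):
    return {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "datePublished": date_published,
        "author": {"@type": "Person", "name": author_name},
    }

jsonld = build_news_jsonld("Example headline", "2025-01-26", "Jane Doe")
tag = f'<script type="application/ld+json">{json.dumps(jsonld)}</script>'
enhanced = html_page.replace("</head>", tag + "</head>")
```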
Figures:
Figure 1: Article processing workflow.
Figure 2: News title and body detection.
Figure 3: Architecture overview.
Figure 4: Google Rich Results Checker.
Figure 5: Original JSON-LD data.
Figure 6: Generated JSON-LD data.
Figure 7: Histogram of similarity scores for article body (independent.co.uk).
Figure 8: Histogram of similarity scores for article title (Skynewsarabia.com).
16 pages, 603 KiB  
Article
Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks
by Maryam Abbasi, Paulo Váz, José Silva and Pedro Martins
Appl. Sci. 2025, 15(3), 1225; https://doi.org/10.3390/app15031225 - 25 Jan 2025
Viewed by 197
Abstract
The rise of deepfakes—synthetic media generated using artificial intelligence—threatens digital content authenticity, facilitating misinformation and manipulation. However, deepfakes can also depict real or entirely fictitious individuals, leveraging state-of-the-art techniques such as generative adversarial networks (GANs) and emerging diffusion-based models. Existing detection methods face challenges with generalization across datasets and vulnerability to adversarial attacks. This study focuses on subsets of frames extracted from the DeepFake Detection Challenge (DFDC) and FaceForensics++ videos to evaluate three convolutional neural network architectures—XCeption, ResNet, and VGG16—for deepfake detection. Performance metrics include accuracy, precision, F1-score, AUC-ROC, and Matthews Correlation Coefficient (MCC), combined with an assessment of resilience to adversarial perturbations via the Fast Gradient Sign Method (FGSM). Among the tested models, XCeption achieves the highest accuracy (89.2% on DFDC), strong generalization, and real-time suitability, while VGG16 excels in precision and ResNet provides faster inference. However, all models exhibit reduced performance under adversarial conditions, underscoring the need for enhanced resilience. These findings indicate that robust detection systems must consider advanced generative approaches, adversarial defenses, and cross-dataset adaptation to effectively counter evolving deepfake threats. Full article
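The FGSM perturbation used for the resilience assessment is x_adv = x + ε·sign(∇x L); a toy sketch follows, with a placeholder linear model standing in for the evaluated CNNs:

```python
# Fast Gradient Sign Method sketch on a placeholder detector.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # stand-in detector
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.01):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(4, 3, 64, 64)    # batch of frames
y = torch.tensor([0, 1, 0, 1])  # 0 = real, 1 = fake
x_adv = fgsm(x, y)
```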
Figures:
Figure 1: Training and validation loss of XCeption, ResNet-50, and VGG16 over 30 epochs on the DFDC dataset.
Figure 2: Cross-dataset generalization with 10% fine tuning on FaceForensics++.
Figure 3: Confusion matrix for XCeption on the DFDC dataset. XCeption demonstrates balanced performance with a low false positive rate and high recall, effectively capturing deepfake-specific features.
Figure 4: Confusion matrix for ResNet-50 on the DFDC dataset. ResNet-50 exhibits higher false positives, misclassifying real frames as fake, impacting its precision.
Figure 5: Confusion matrix for VGG16 on the DFDC dataset. VGG16 achieves balanced performance but faces challenges with subtle manipulations, leading to occasional false negatives.
21 pages, 1454 KiB  
Article
Adherence to the Singapore Integrated 24 h Activity Guidelines for Pre-Primary School Children Before, During and After the COVID-19 Lockdown in Singapore
by Seow Ting Low, Terence Buan Kiong Chua, Dan Li and Michael Chia
Sports 2025, 13(2), 32; https://doi.org/10.3390/sports13020032 - 23 Jan 2025
Viewed by 394
Abstract
The COVID-19 pandemic has significantly disrupted the lives of pre-primary school children in Singapore where increased infection rates prompted lockdown measures that altered children’s daily routines. This study aimed to evaluate the impact of the pandemic on the lifestyle behaviours and health quality of 3134 children aged 5 to 6 years across three periods: pre-COVID, COVID-19 lockdown, and COVID-19 endemicity. Data were collected using the Surveillance of Digital Media Habits in Early Childhood Questionnaire (SMALLQ®) to measure on- and off-screen media habits of children and the Pediatric Quality of Life Inventory (PaedQL) to assess children’s health quality. Adherence to physical activity (PA) guidelines dropped from 32.7% pre-COVID to 27.4% during lockdown but improved to 34.4% in endemicity (p < 0.05). Sleep (SL) adherence followed a similar pattern, decreasing from 33.4% to 27.9% before rising to 40.6% (p < 0.05). Screen time (ST) adherence significantly declined during lockdown (16.7% to 10.8%, p < 0.001). Weak positive correlations with all PaedQL metrics were observed across periods, except during endemicity (p < 0.05). Concerted efforts involving key stakeholders must be made to mitigate the negative effects of the pandemic on children’s lifestyle behaviours and QoL, ensuring they are better prepared for the transition to primary school. Full article
(This article belongs to the Special Issue Advances in Motor Behavior and Child Health)
Figures:
Figure 1: Percentage of children who met none, one, two or all three guidelines in each time period.
Figure 2: PaedQL score for total health by the number of guidelines met across each time period of COVID-19.
Figure 3: PaedQL score for psychosocial health by the number of guidelines met across each time period of COVID-19.
Figure 4: PaedQL score for physical health by the number of guidelines met across each time period of COVID-19.
20 pages, 17747 KiB  
Article
A Secure Learned Image Codec for Authenticity Verification via Self-Destructive Compression
by Chen-Hsiu Huang and Ja-Ling Wu
Big Data Cogn. Comput. 2025, 9(1), 14; https://doi.org/10.3390/bdcc9010014 - 15 Jan 2025
Viewed by 440
Abstract
In the era of deepfakes and AI-generated content, digital image manipulation poses significant challenges to image authenticity, creating doubts about the credibility of images. Traditional image forensics techniques often struggle to detect sophisticated tampering, and passive detection approaches are reactive, verifying authenticity only after counterfeiting occurs. In this paper, we propose a novel full-resolution secure learned image codec (SLIC) designed to proactively prevent image manipulation by creating self-destructive artifacts upon re-compression. Once a sensitive image is encoded using SLIC, any subsequent re-compression or editing attempts will result in visually severe distortions, making the image’s tampering immediately evident. Because the content of an SLIC image is either original or visually damaged after tampering, images encoded with this secure codec hold greater credibility. SLIC leverages adversarial training to fine-tune a learned image codec that introduces out-of-distribution perturbations, ensuring that the first compressed image retains high quality while subsequent re-compressions degrade drastically. We analyze and compare the adversarial effects of various perceptual quality metrics combined with different learned codecs. Our experiments demonstrate that SLIC holds significant promise as a proactive defense strategy against image manipulation, offering a new approach to enhancing image credibility and authenticity in a media landscape increasingly dominated by AI-driven forgeries. Full article
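A heavily simplified sketch of such a training objective: keep the first compression faithful while maximizing the distortion of a re-compression. MSE stands in here for the paper's perceptual metrics (e.g., LPIPS), and the weights are assumed values, not the paper's settings:

```python
# Simplified SLIC-style objective over tensors x (source), x1 (first
# compression), x2 (re-compression of x1), and an estimated rate term.
import torch
import torch.nn.functional as F

def slic_loss(x, x1, x2, rate_bits, lam=0.01, adv_weight=1.0):
    rd = rate_bits + lam * F.mse_loss(x1, x)  # rate-distortion on the first pass
    adv = -F.mse_loss(x2, x1)                 # negated: push re-compression quality down
    return rd + adv_weight * adv

x = torch.rand(1, 3, 64, 64)
x1 = x + 0.01 * torch.randn_like(x)  # stand-in codec outputs
x2 = x + 0.20 * torch.randn_like(x)
loss = slic_loss(x, x1, x2, rate_bits=torch.tensor(1000.0))
```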
Figures:
Figure 1: The proposed secure learned image codec (SLIC) will be self-destroyed after re-compression. Top images from left to right: the first encoded image, blurred encoded image, JPEG compressed image, affine transformed image (shifted 10 pixels, rotated 5°, scaled 95%), face-swapped image, and inpainted image. The bottom re-compressed images are severely damaged.
Figure 2: The SLIC training flow. In a rate–distortion optimized neural codec, we introduce the adversarial re-compression and the adversarial noised re-compression losses.
Figure 3: The trends of (a) adversarial loss $\mathcal{L}_A$ and (b) re-compression PSNR among various perceptual metrics during training.
Figure 4: The re-compression visual quality of the SLICs. The top row shows the source images and the remaining rows are the re-compressed images $\hat{x}_2$ of Balle2018 codecs, adversarially trained with different perceptual metrics.
Figure 5: The re-compressed results of Balle2018 + $\mathcal{P}_{\mathrm{LPIPS}}$ SLIC images after various editing operations.
Figure 6: The results of the Balle2018 + $\mathcal{P}_{\mathrm{LPIPS}}$ SLIC images were re-compressed after GenAI manipulation: faceswap and stable diffusion inpainting.
Figure 7: The failed case: an SLIC-protected face will lose its adversarial perturbations after deepfake re-generation of the victim’s face on the target image.
Figure 8: The rate–distortion curve of the SLIC Balle2018 + $\mathcal{P}_{\mathrm{LPIPS}}$ compared with the original codec. They were evaluated on the Kodak dataset.
Figure A1: The re-compressed results of Balle2018 + $\mathcal{P}_{\mathrm{DISTS}}$ SLIC images after various editing operations.
Figure A2: The re-compressed results of Minnen2018 + $\mathcal{P}_{\mathrm{LPIPS}}$ SLIC images after various editing operations.
Figure A3: The re-compressed results of Cheng2018 + $\mathcal{P}_{\mathrm{LPIPS}}$ SLIC images after various editing operations.