

Computers, Volume 11, Issue 6 (June 2022) – 17 articles

Cover Story (view full-size image): Simulators allow the easy setup and test of different scenarios that would otherwise be financially costly and difficult to implement on a technical level in a real testbed. Thus, developing specific use cases in a simulation environment is a suitable solution, especially with edge computing (EC) architectures where the number of devices can be considerable. Can we trust the simulations, however? How accurate are these tools? To answer these questions, we implemented the EdgeBench benchmark in the real world and using FogComputingSim. We compared several execution metrics, and overall, the simulated environment successfully reproduced the real-world results, thus allowing us to state that we can trust EC simulations in the first approaches to problems. However, these do not fully replace a real-world implementation. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 4055 KiB  
Article
Computer Vision-Based Inspection System for Worker Training in Build and Construction Industry
by M. Fikret Ercan and Ricky Ben Wang
Computers 2022, 11(6), 100; https://doi.org/10.3390/computers11060100 - 20 Jun 2022
Cited by 2 | Viewed by 2809
Abstract
Recently, computer vision has been applied successfully in various fields of engineering, ranging from manufacturing to autonomous cars. A key driver of this development is the achievements of the latest object detection and classification architectures. In this study, we utilized computer vision and the latest object detection techniques for an automated assessment system, developed to reduce the person-hours involved in worker training assessment. In our local building and construction industry, workers are required to be certified for their technical skills in order to qualify to work in this industry. For the qualification, they are required to go through a training and assessment process. During the assessment, trainees implement an assembly, such as electrical wiring and wall trunking, by referring to the technical drawings provided. Trainees’ work quality and correctness are then examined manually and visually by a team of experts, which is a time-consuming process. The system described in this paper aims to automate the assessment process to reduce the significant person-hours required. We employed computer vision techniques to measure the dimensions, orientation, and position of the wall assembly produced, hence speeding up the assessment process. A number of key parts and components are analyzed, and their discrepancies from the technical drawing are reported as the assessment result. The performance of the developed system depends on the accurate detection of the wall assembly objects and their corner points. Corner points are used as reference points for the measurements, considering the shape of the objects in this particular application. However, conventional corner detection algorithms are founded upon pixel-based operations and return many redundant or false corner points. In this study, we employed a hybrid approach using deep learning and conventional corner detection algorithms. Deep learning is employed to detect the whereabouts of objects as well as their reference corner points in the image. We then search within these locations for potential corner points returned by the conventional corner detector. This approach resulted in highly accurate detection of reference points for the measurement and evaluation of the assembly. Full article
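The hybrid strategy this abstract describes — a learned detector localizes objects and reference regions, and only conventional corner candidates falling inside those regions are kept — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the bounding boxes and corner lists are hypothetical stand-ins for the detector outputs.

```python
def filter_corners(corners, boxes, tol=0):
    """Keep only corner candidates that fall inside a detected region.

    corners: iterable of (x, y) points from a conventional detector
             (e.g. Harris or FAST).
    boxes:   iterable of (x0, y0, x1, y1) regions from the learned
             object/corner-point detector.
    tol:     optional margin around each box, in pixels.
    """
    kept = []
    for (x, y) in corners:
        for (x0, y0, x1, y1) in boxes:
            if x0 - tol <= x <= x1 + tol and y0 - tol <= y <= y1 + tol:
                kept.append((x, y))
                break  # a corner only needs to match one region
    return kept

# Hypothetical detector outputs for illustration:
raw_corners = [(12, 14), (80, 95), (300, 40), (83, 99)]
detected_boxes = [(10, 10, 20, 20), (75, 90, 90, 105)]
print(filter_corners(raw_corners, detected_boxes))
# -> [(12, 14), (80, 95), (83, 99)]; the far-away candidate is discarded
```

The filtering step is what removes the redundant or false corners that pixel-based detectors return on their own.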
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
Show Figures

Figure 1. (a) Example of poor workmanship and alignment issues. (b) Manual assessment of trainees’ work.
Figure 2. Hardware setup and test assemblies used in the experiments.
Figure 3. A flow chart of the operations.
Figure 4. Detected (a) Type A assembly and (b) Type B assembly.
Figure 5. Experiments with corner detection algorithms for both assembly types: (a) Minimum Eigen, (b) Harris, and (c) FAST. Green marks indicate raw corner points returned from the algorithms. The minimum accepted corner quality was set to 0.01 and the filter size to 5 for all algorithms.
Figure 6. Error filtering using trained deep learning networks; corners detected after error filtering are marked.
Figure 7. Template matching for Assembly A. (a) Template of the object fitted on the image; edges labeled 1 to 6 and the conduit length are some of the key measurements. (b) Detected reference corner points illustrated on the image for better visualization. (c) A screen capture of the final report generated at the end of the assessment.
Figure 8. Template matching for Assembly B. (a) A block diagram showing the camera and light arrangement. (b) Template of the object fitted on the image; edges labeled 1 to 4, the conduit length, and two angles of the bent trunking are the key measurement points. (c) Detected reference corner points illustrated on the image for better visualization. (d) A screen capture of the final report generated at the end of the assessment.
Figure 9. (a) User interface for the inspection system, and (b) manual measurement of points of interest by the user.
Figure 10. Examples of measurement errors (a,b); the template drawn deviates from the actual object, (c,d) with more severe deviations due to errors caused by image and light quality.
21 pages, 1666 KiB  
Article
Building DeFi Applications Using Cross-Blockchain Interaction on the Wish Swap Platform
by Rita Tsepeleva and Vladimir Korkhov
Computers 2022, 11(6), 99; https://doi.org/10.3390/computers11060099 - 16 Jun 2022
Cited by 9 | Viewed by 3910
Abstract
Blockchain is a developing technology that can provide users with advantages such as decentralization, data security, and transparency of transactions. Blockchain has many applications, one of which is the decentralized finance (DeFi) industry. DeFi is a huge aggregator of various financial blockchain protocols. At the moment, the total value locked in these protocols reaches USD 82 billion. Every day, more and more new users come to DeFi with their investments. The concept of decentralized finance involves the creation of a single ecosystem of many blockchains that interact with each other. The problem of combining and interconnecting blockchains therefore becomes crucial to enabling DeFi. In this paper, we examine the essence of the DeFi industry and the possibilities for overcoming the problem of cross-blockchain interaction, and we present our approach to solving this problem with the Wish Swap platform, which, in particular, provides improved fault tolerance for cross-chain interaction by using multiple backend nodes and multisignatures. We analyze the results of the proposed solution and demonstrate how a prototype pre-sale application can be created based on the proposed concept. Full article
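The fault-tolerance mechanism the abstract mentions — multiple backend nodes plus multisignatures — reduces at its core to a k-of-n signature check: a swap is only executed once enough distinct backends have co-signed it. The sketch below uses HMACs as stand-ins for the ECDSA signatures a real bridge would verify on-chain; the key names, threshold, and message format are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical backend keys; a real bridge would use on-chain ECDSA keys.
BACKEND_KEYS = {f"backend-{i}": f"secret-{i}".encode() for i in range(3)}
THRESHOLD = 2  # at least 2 of 3 backends must co-sign a swap

def sign(backend, message):
    """A backend's 'signature' over a swap message (HMAC stand-in)."""
    return hmac.new(BACKEND_KEYS[backend], message, hashlib.sha256).hexdigest()

def verify_swap(message, signatures):
    """Accept a cross-chain swap only if enough distinct backends signed it."""
    valid = {b for b, sig in signatures.items()
             if b in BACKEND_KEYS
             and hmac.compare_digest(sig, sign(b, message))}
    return len(valid) >= THRESHOLD

msg = b"swap:chainA->chainB:42tokens:addr0xabc"
sigs = {"backend-0": sign("backend-0", msg),
        "backend-2": sign("backend-2", msg)}
print(verify_swap(msg, sigs))  # True: 2 of 3 backends agree
```

Because any threshold-sized subset of backends suffices, the bridge keeps working even if a minority of backend nodes is offline, which is the fault-tolerance property claimed above.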
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
Show Figures

Figure 1. Polkadot ecosystem.
Figure 2. Token exchange.
Figure 3. Token exchange architecture.
Figure 4. Signature composing.
Figure 5. Architecture of the cross-chain bridge with multiple backends and multisignatures.
Figure 6. Project architecture.
Figure 7. Comparison of Wish Swap with other projects.
Figure 8. Comparison of the proposed solution with other ones.
21 pages, 511 KiB  
Article
A Light Signaling Approach to Node Grouping for Massive MIMO IoT Networks
by Emma Fitzgerald, Michał Pióro, Harsh Tataria, Gilles Callebaut, Sara Gunnarsson and Liesbet Van der Perre
Computers 2022, 11(6), 98; https://doi.org/10.3390/computers11060098 - 16 Jun 2022
Viewed by 1936
Abstract
Massive MIMO is one of the leading technologies for connecting very large numbers of energy-constrained nodes, as it offers both extensive spatial multiplexing and large array gain. A challenge resides in partitioning the many nodes into groups that can communicate simultaneously such that the mutual interference is minimized. Here we propose node partitioning strategies that do not require full channel state information, but rather are based on nodes’ respective directional channel properties. In our considered scenarios, these typically have a time constant that is far larger than the coherence time of the channel. We developed both an optimal and an approximation algorithm to partition users based on directional channel properties, and evaluated them numerically. Our results show that both algorithms, despite using only these directional channel properties, achieve similar performance in terms of the minimum signal-to-interference-plus-noise ratio for any user, compared with a reference method using full channel knowledge. In particular, we demonstrate that grouping nodes with related directional properties is to be avoided. We hence realize a simple partitioning method, requiring minimal information to be collected from the nodes, and in which this information typically remains stable over the long term, thus promoting the system’s autonomy and energy efficiency. Full article
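The grouping idea above — nodes with related directional properties should not end up in the same group — can be illustrated with a simple heuristic: sort nodes by dominant direction and deal them round-robin into groups, so angular neighbours are split apart. This is an illustrative sketch under that assumption, not the paper's optimal or approximation algorithm, and the example angles are invented.

```python
def partition_by_direction(angles, k):
    """Round-robin assignment of angle-sorted nodes to k groups.

    angles: dominant channel direction of each node, in degrees.
    Nodes with similar directions land in different groups, which is
    the property the paper's results favour.
    """
    order = sorted(range(len(angles)), key=lambda i: angles[i])
    groups = [[] for _ in range(k)]
    for rank, node in enumerate(order):
        groups[rank % k].append(node)  # neighbours go to different groups
    return groups

# Hypothetical dominant directions (degrees) for six nodes:
angles = [10, 15, 100, 110, 200, 210]
print(partition_by_direction(angles, 2))
# -> [[0, 2, 4], [1, 3, 5]]: near-identical directions are split apart
```

Note that, like the paper's method, this only needs each node's slowly varying directional information, not full channel state information.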
(This article belongs to the Special Issue Edge Computing for the IoT)
Show Figures

Graphical abstract
Figure 1. Two node partitioning approaches: “pizza” partitioning (left), in which nodes are grouped such that the nodes in each group have large differences in their directions from the base station, and “onion” partitioning (right), in which nodes with similar signal strength are grouped together. With line-of-sight and free-space propagation, shown here, the former is equivalent to dividing the nodes into angular slices, whereas the latter is equivalent to grouping the nodes into rings based on their distance from the base station.
Figure 2. Two example channels with a high and low angular spectrum spread. Here, signal power is normalized such that the total power of each channel across the entire angular spectrum is 1.
Figure 3. Example problem instance and solution for the optimization problem. Six nodes’ dominant directions (larger circles) and maximal allowed angular shifts (smaller circles) are shown, placed on a circle around the base station (central black circle). The nodes are partitioned into two groups, red and blue, such that the minimum angular difference between any two nodes is maximized, after allowing nodes to shift to any position within the range given by their maximal allowed angular shifts.
Figure 4. SINR performance for different numbers of nodes and partitioning methods. (a) Minimum SINR. (b) Average SINR. (c) Maximum SINR. (d) CDF over the SINR of all nodes for 36-node network instances.
Figure 5. Performance of the optimization and approximation algorithms. (a) Objective function value for partitioning with the optimization problem and with the approximation algorithm, for different numbers of nodes. (b) Solution time vs. number of nodes.
19 pages, 2155 KiB  
Article
Assisting Educational Analytics with AutoML Functionalities
by Spyridon Garmpis, Manolis Maragoudakis and Aristogiannis Garmpis
Computers 2022, 11(6), 97; https://doi.org/10.3390/computers11060097 - 15 Jun 2022
Cited by 2 | Viewed by 2776
Abstract
The plethora of changes that have taken place in higher education policy in recent years in Greece has led to unification, the abolition of departments or technological educational institutions (TEI), and mergers at universities. As a result, many students are required to complete their studies in departments of the abolished TEI. Dropout or a delay in graduation is a significant problem among students who newly joined the university, in addition to the provision of studies. There are various reasons for this, with student performance during studies being one of the major contributing factors. This study was aimed at predicting the time required for weak students to pass their courses, so as to allow the university to develop strategic programs that will help them improve performance and graduate in time. This paper presents various components of educational data mining incorporating a new state-of-the-art strategy, called AutoML, which is used to find the best models and parameters and is capable of predicting the length of time required for students to pass their courses using their past course performance and academic information. A dataset of 23,687 “Computer Networking” module students was used to train and evaluate a classification model developed in the KNIME Analytics (open-source) data science platform. The accuracy of the model was measured using well-known evaluation criteria, such as precision, recall, and F-measure. The model was applied to data related to three basic courses and correctly predicted approximately 92% of students’ performance, specifically identifying students who are likely to drop out or experience a delay before graduating. Full article
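The evaluation criteria named above (precision, recall, F-measure) have a direct definition in terms of true/false positives and false negatives. The sketch below computes them from scratch; the "delay"/"ontime" labels are hypothetical stand-ins for the model's predictions, not data from the study.

```python
def prf(y_true, y_pred, positive):
    """Precision, recall, and F-measure for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Toy labels: "delay" marks students predicted to need extra time.
y_true = ["delay", "delay", "ontime", "ontime", "delay"]
y_pred = ["delay", "ontime", "ontime", "delay", "delay"]
print(prf(y_true, y_pred, "delay"))
```

F-measure is the harmonic mean of precision and recall, so it only rewards a classifier that keeps both high, which matters when the delayed-graduation class is a minority.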
Show Figures

Graphical abstract
Figure 1. Model training and testing processes.
Figure 2. AutoML execution after reading the CSV file for the IIS, OS, and both courses.
Figure 3. Cross-validation: provides a skeleton of the nodes necessary for cross-validation.
Scheme 1. Missed exams and attempts to pass the IIT course.
Scheme 2. Missed exams and attempts to pass the OS course.
Scheme 3. Missed exams and attempts to pass both courses, IIT and OS.
29 pages, 11989 KiB  
Article
Accidental Choices—How JVM Choice and Associated Build Tools Affect Interpreter Performance
by Jonathan Lambert, Rosemary Monahan and Kevin Casey
Computers 2022, 11(6), 96; https://doi.org/10.3390/computers11060096 - 14 Jun 2022
Cited by 1 | Viewed by 3067
Abstract
Considering the large number of optimisation techniques that have been integrated into the design of the Java Virtual Machine (JVM) over the last three decades, the Java interpreter continues to persist as a significant bottleneck in the performance of bytecode execution. This paper examines the relationship between Java Runtime Environment (JRE) performance concerning the interpreted execution of Java bytecode and the effect modern compiler selection and integration within the JRE build toolchain has on that performance. We undertook this evaluation relative to a contemporary benchmark suite of application workloads, the Renaissance Benchmark Suite. Our results show that the choice of GNU GCC compiler version used within the JRE build toolchain statistically significantly affects runtime performance. More importantly, not all OpenJDK releases and JRE JVM interpreters are equal. Our results show that OpenJDK JVM interpreter performance is associated with benchmark workload. In addition, in some cases, rolling back to an earlier OpenJDK version and using a more recent GNU GCC compiler within the build toolchain of the JRE can significantly positively impact JRE performance. Full article
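The per-workload comparison the abstract describes — which GCC version yields the fastest JRE for each benchmark — can be sketched as a simple ranking over measured runtimes. The runtimes and version names below are invented placeholders for illustration, not the paper's measurements from the Renaissance suite.

```python
def rank_compilers(times):
    """Rank GCC versions per benchmark by mean runtime (lower is better).

    times: {benchmark: {gcc_version: [runtimes_in_seconds, ...]}}
    Returns {benchmark: [gcc_version, ...]} ordered best-first.
    """
    ranking = {}
    for bench, by_gcc in times.items():
        ranking[bench] = sorted(
            by_gcc, key=lambda v: sum(by_gcc[v]) / len(by_gcc[v]))
    return ranking

# Hypothetical runtimes; real data would come from repeated benchmark runs.
times = {
    "als": {"gcc5": [9.1, 9.0], "gcc7": [8.2, 8.4], "gcc8": [8.6, 8.5]},
    "reactors": {"gcc5": [4.0, 4.1], "gcc7": [4.3, 4.2], "gcc8": [3.9, 4.0]},
}
print(rank_compilers(times))
# -> {'als': ['gcc7', 'gcc8', 'gcc5'], 'reactors': ['gcc8', 'gcc5', 'gcc7']}
```

The point mirrored here is that the best toolchain choice is workload-dependent: no single GCC version wins both hypothetical benchmarks.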
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)
Show Figures

Figure 1. Raspberry Pi 4 Model B board peripheral device layout, detailing the position of the Broadcom BCM2711 processor, RAM, USB Type-C power supply, and network ports. Source: [80].
Figure 2. Execution time analysis of the 24 Renaissance benchmark suite applications; each panel depicts the runtime performance of the OpenJDK9 JRE built with GNU GCC compiler versions 5, 6, 7, and 8.
Figure 3. Execution time analysis of the 24 Renaissance benchmark suite applications; each panel depicts the runtime performance of the OpenJDK10 JRE built with GNU GCC compiler versions 5, 6, 7, and 8.
Figure 4. Execution time analysis of the 24 Renaissance benchmark suite applications; each panel depicts the runtime performance of the OpenJDK11 JRE built with GNU GCC compiler versions 5, 6, 7, and 8.
Figure 5. Execution time analysis of the 24 Renaissance benchmark suite applications; each panel depicts the runtime performance of the OpenJDK12 JRE built with GNU GCC compiler versions 5, 6, 7, and 8.
Figure 6. Execution time analysis of the 24 Renaissance benchmark suite applications; each panel depicts the runtime performance of the OpenJDK13 JRE built with GNU GCC compiler versions 5, 6, 7, and 8.
Figure 7. Execution time analysis of the 24 Renaissance benchmark suite applications; each panel depicts the runtime performance of the OpenJDK14 JRE built with GNU GCC compiler versions 5, 6, 7, and 8.
Figure 8. Stacked barplots depicting a ranking of each GNU GCC version for each benchmark workload; each panel depicts the rankings for each OpenJDK JRE release.
Figure 9. The distribution of observed performance gain opportunities across the complete set of benchmark applications.
17 pages, 1090 KiB  
Review
Blockchain Technology toward Creating a Smart Local Food Supply Chain
by Jovanka Damoska Sekuloska and Aleksandar Erceg
Computers 2022, 11(6), 95; https://doi.org/10.3390/computers11060095 - 13 Jun 2022
Cited by 25 | Viewed by 6157
Abstract
The primary purpose of the supply chains is to ensure and secure the availability and smooth flow of the necessary resources for efficient production processes and consumption. Supply chain activities have been experiencing significant changes due to the importance and creation of the integrated process. Blockchain is viewed as an innovative tool for transforming supply chain management’s (SCM’s) actual business model; on the other hand, the SCM provides an applicative value of blockchain technology. The research is focused on examining the influence of blockchain technology on the increasing efficiency, transparency, auditability, traceability, and security issues of the food supply chain (FSC), with particular attention to the local food supply chain (LFSC). The main objective of the research is to suggest the implementation of blockchain technology in the local food supply chain as a niche of the food industry. The result of the research is the identification of a three-layers model of a smart local food supply chain. The model provides efficient and more transparent tracking across the local food supply chain, improving food accessibility, traceability, and safety. Full article
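The traceability and auditability properties that motivate blockchain in a food supply chain rest on one mechanism: each supply-chain event is hash-linked to its predecessor, so any later tampering is detectable. The minimal sketch below illustrates that mechanism only; the field names are hypothetical and this is not the review's three-layer model or any production ledger.

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a tamper-evident record to a hash-linked list."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"prev": prev, "payload": payload},
                   sort_keys=True).encode()).hexdigest()
    chain.append({"prev": prev, "payload": payload, "hash": digest})
    return chain

def verify(chain):
    """Recompute every link; any altered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"prev": prev, "payload": rec["payload"]},
                       sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"stage": "farm", "lot": "A-17"})
add_record(chain, {"stage": "transport", "lot": "A-17", "temp_ok": True})
print(verify(chain))  # True until any record is altered
```

Editing any earlier record (say, the lot number at the farm stage) invalidates every subsequent hash, which is what gives auditors end-to-end traceability.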
(This article belongs to the Special Issue Blockchain-Based Systems)
Show Figures

Figure 1. Advantages and disadvantages of using blockchain for the supply chain [45].
Figure 2. Blockchain in FSCM [76].
Figure 3. Three-layer model of a blockchain-based FSC (source: authors).
Figure 4. Implementation of blockchain technology in the LFSC (source: authors).
19 pages, 16212 KiB  
Article
Non-Zero Crossing Point Detection in a Distorted Sinusoidal Signal Using Logistic Regression Model
by Venkataramana Veeramsetty, Srividya Srinivasula and Surender Reddy Salkuti
Computers 2022, 11(6), 94; https://doi.org/10.3390/computers11060094 - 11 Jun 2022
Cited by 2 | Viewed by 2305
Abstract
Non-zero crossing point detection in a sinusoidal signal is essential in various power system and power electronics applications, such as power system protection and power converter controller design. In this paper, 96 datasets are created from a distorted sinusoidal signal based on MATLAB simulation. Distorted sinusoidal signals are generated in MATLAB with various noise and harmonic levels. A logistic regression model is used to predict the non-zero crossing points in a distorted signal based on input features such as slope, intercept, correlation, and RMSE. The logistic regression model is trained and tested in the Google Colab environment. As per the simulation results, it is observed that the logistic regression model is able to predict all non-zero crossing points in a distorted signal. Full article
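A logistic regression classifier of the kind described can be trained with plain gradient descent on per-window features. The sketch below uses only two features and a tiny invented dataset as stand-ins for the paper's slope/intercept/correlation/RMSE features and its 96 datasets; it is a minimal illustration, not the authors' Colab pipeline.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression (no libraries)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi                      # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical window features [slope, RMSE]; label 1 = true zero crossing.
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])  # [1, 1, 0, 0]
```

The sigmoid output can also be read as a probability, which lets a protection system set a stricter decision threshold than 0.5 when false zero-crossing detections are costly.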
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
Show Figures

Figure 1. Logistic regression model architecture.
Figure 2. Box plot for ZCP-Noise-01.
Figure 3. Histogram plot for ZCP-Noise-01.
Figure 4. Correlation plot for various datasets.
Figure 5. Performance of the LGR model on the test signal.
Figure 6. Comparison of the performance of the logistic regression model on various distorted signals.
Figure 7. Comparison of the performance of the logistic regression model on harmonic and noise signals.
Figure A1. Information about the distorted signal with noise.
Figure A2. Information about the distorted signal with harmonics.
Figure A3. Information about the distorted signal with noise and harmonics.
17 pages, 2527 KiB  
Article
Automated Detection of Left Bundle Branch Block from ECG Signal Utilizing the Maximal Overlap Discrete Wavelet Transform with ANFIS
by Bassam Al-Naami, Hossam Fraihat, Hamza Abu Owida, Khalid Al-Hamad, Roberto De Fazio and Paolo Visconti
Computers 2022, 11(6), 93; https://doi.org/10.3390/computers11060093 - 10 Jun 2022
Cited by 17 | Viewed by 3093
Abstract
Left bundle branch block (LBBB) is a common disorder in the heart’s electrical conduction system that leads to the ventricles’ uncoordinated contraction. The complete LBBB is usually associated with underlying heart failure and other cardiac diseases. Therefore, early automated detection is vital. This work aimed to detect the LBBB through the QRS electrocardiogram (ECG) complex segments taken from the MIT-BIH arrhythmia database. The used data contain 2655 LBBB (abnormal) and 1470 normal signals (i.e., 4125 total signals). The proposed method was employed in the following steps: (i) QRS segmentation and filtration, (ii) application of the Maximal Overlapped Discrete Wavelet Transform (MODWT) on the ECG R wave, (iii) selection of the detailed coefficients of the MODWT (D2, D3, D4), kurtosis, and skewness as extracted features to be fed into the Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier. The obtained results proved that the proposed method performed well based on the achieved sensitivity, specificity, and classification accuracies of 99.81%, 100%, and 99.88%, respectively (F-Score is equal to 0.9990). Our results showed that the proposed method was robust and effective and could be used in real clinical situations. Full article
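Step (iii) above extracts distribution-shape features (kurtosis and skewness) from wavelet detail coefficients. The sketch below illustrates that feature-extraction step only: the undecimated Haar differences are a toy stand-in for the MODWT details D2–D4, and the beat samples are invented, so this is not the paper's filtering or ANFIS stage.

```python
def haar_details(signal):
    """Undecimated Haar detail coefficients -- a toy stand-in for MODWT."""
    return [(signal[i] - signal[i - 1]) / 2 for i in range(1, len(signal))]

def shape_features(xs):
    """Skewness and (non-excess) kurtosis from central moments."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return skew, kurt

# Hypothetical QRS-like segment:
beat = [0.0, 0.1, 0.3, 1.2, -0.6, 0.2, 0.1, 0.0]
details = haar_details(beat)
print(shape_features(details))  # (skewness, kurtosis) pair for the classifier
```

The intuition is that LBBB widens and reshapes the QRS complex, which shifts the distribution of detail coefficients; skewness and kurtosis compress that change into two scalar inputs for the ANFIS classifier.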
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
Show Figures

Figure 1. Block diagram of the proposed method.
Figure 2. Processing steps for a normal R-R interval (a), after the notch filter (b), after the BPF (c), and the segmented QRS complex (d).
Figure 3. Processing steps for the LBBB R-R interval (a), after the notch filter (b), after the BPF (c), and the segmented QRS complex (d).
Figure 4. The raw ECG signal (a) decomposed into the selected detail coefficients D2 (b), D3 (c), and D4 (d).
Figure 5. Training results of ANFIS on 80% of the LBBB and normal cases (a), and testing of ANFIS on the remaining 20% (b).
Figure 6. ANFIS output on 825 ECG beat test samples (Class 1 = LBBB, Class 2 = normal).
16 pages, 1330 KiB  
Article
Assessment of Virtual Reality among University Professors: Influence of the Digital Generation
by Álvaro Antón-Sancho, Pablo Fernández-Arias and Diego Vergara
Computers 2022, 11(6), 92; https://doi.org/10.3390/computers11060092 - 10 Jun 2022
Cited by 14 | Viewed by 2891
Abstract
This paper conducts quantitative research on the assessment made by a group of 623 Spanish and Latin American university professors about the use of virtual reality technologies in the classroom and their own digital skills in this respect. The main objective is to analyze the differences that exist in this regard due to the digital generation of the professors (immigrants or digital natives). As an instrument, a survey designed for this purpose was used, the validity of which has been tested in the study. It was found that digital natives say they are more competent in the use of virtual reality and value its technical and didactic aspects more highly, although they also identify more disadvantages in its use than digital immigrants. Differences in responses were found by gender and areas of knowledge of the professors with respect to the opinions expressed. It is suggested that universities design training plans on teaching digital competence and include in them the didactic use of virtual reality technologies in higher education. Full article
Show Figures

Graphical abstract
Figure 1. Countries participating in the research.
Figure 2. Research variables.
Figure 3. Means and deviations of the different questions of the subscales with high standard deviations: (a) competence in the use of VR; (b) future forecast of VR in higher education; and (c) disadvantages of VR.
10 pages, 424 KiB  
Article
Functional Data Analysis for Imaging Mean Function Estimation: Computing Times and Parameter Selection
by Juan A. Arias-López, Carmen Cadarso-Suárez and Pablo Aguiar-Fernández
Computers 2022, 11(6), 91; https://doi.org/10.3390/computers11060091 - 2 Jun 2022
Viewed by 2318
Abstract
In the field of medical imaging, one of the most extended research setups consists of the comparison between two groups of images, a pathological set against a control set, in order to search for statistically significant differences in brain activity. Functional Data Analysis (FDA), a relatively new field of statistics dealing with data expressed in the form of functions, uses methodologies which can be easily extended to the study of imaging data. Examples of this have been proposed in previous publications where the authors settle the mathematical groundwork and properties of the proposed estimators. The methodology herein tested allows for the estimation of mean functions and simultaneous confidence corridors (SCC), also known as simultaneous confidence bands, for imaging data and for the difference between two groups of images. FDA applied to medical imaging presents at least two advantages compared to previous methodologies: it avoids loss of information in complex data structures and avoids the multiple comparison problem arising from traditional pixel-to-pixel comparisons. Nonetheless, computing times for this technique have only been explored in reduced and simulated setups. In the present article, we apply this procedure to a practical case with data extracted from open neuroimaging databases; then, we measure computing times for the construction of Delaunay triangulations and for the computation of mean function and SCC for one-group and two-group approaches. The results suggest that the previous researcher has been too conservative in parameter selection and that computing times for this methodology are reasonable, confirming that this method should be further studied and applied to the field of medical imaging. Full article
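The core objects in the methodology above are a pointwise mean function over a group of images and a confidence band around it. The sketch below computes a per-pixel normal band as a simplified illustration; a true simultaneous confidence corridor widens the band so coverage holds jointly over all pixels, which this naive version does not do, and the tiny "images" are invented.

```python
import math

def mean_and_corridor(images, z=1.96):
    """Pointwise mean and a naive per-pixel confidence band.

    images: list of equally sized flattened pixel lists (one per subject).
    Returns (mean, lower, upper) lists, one value per pixel.
    """
    n = len(images)
    mean, lower, upper = [], [], []
    for pix in zip(*images):                 # iterate over pixel positions
        m = sum(pix) / n
        sd = math.sqrt(sum((p - m) ** 2 for p in pix) / (n - 1))
        half = z * sd / math.sqrt(n)         # normal band for the mean
        mean.append(m)
        lower.append(m - half)
        upper.append(m + half)
    return mean, lower, upper

# Three hypothetical 4-pixel "images":
imgs = [[1.0, 2.0, 0.5, 0.0],
        [1.2, 1.8, 0.4, 0.1],
        [0.8, 2.2, 0.6, -0.1]]
m, lo, hi = mean_and_corridor(imgs)
print([round(x, 2) for x in m])  # [1.0, 2.0, 0.5, 0.0]
```

For a two-group comparison, regions where one group's corridor excludes the other group's mean function are flagged as hypo- or hyper-activity, as in the paper's Figure 3.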
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
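The triangulation step whose computing time the abstract discusses can be sketched with SciPy. This is an illustrative stand-in, not the authors' pipeline: the grid of pixel coordinates and the fineness parameter `N` (points per axis) are assumptions for the sketch.

```python
# Illustrative sketch: build a Delaunay triangulation over a grid of
# 2D coordinates and time its construction. N is an assumed fineness
# parameter (points per axis), not the paper's exact setup.
import time

import numpy as np
from scipy.spatial import Delaunay

N = 50
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, N), np.linspace(0.0, 1.0, N))
points = np.column_stack([xs.ravel(), ys.ravel()])

t0 = time.perf_counter()
tri = Delaunay(points)  # Delaunay triangulation of the 2D point set
elapsed_ms = (time.perf_counter() - t0) * 1000.0

print(f"{len(points)} points -> {len(tri.simplices)} triangles in {elapsed_ms:.1f} ms")
```

Timing this construction for growing `N` reproduces the kind of measurement reported in the article's Figure 4, though on synthetic points rather than real brain imaging data.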
Show Figures

Figure 1
<p>Delaunay triangulations produced for this practical case with real brain imaging data. The triangulation’s degree of fineness is controlled by the parameter <span class="html-italic">N</span>. (<b>a</b>) <span class="html-italic">N</span> = 10. (<b>b</b>) <span class="html-italic">N</span> = 25. (<b>c</b>) <span class="html-italic">N</span> = 50.</p>
Figure 2
<p>(<b>a</b>) Scale; (<b>b</b>) Lower SCC; (<b>c</b>) Mean Function; and (<b>d</b>) Upper SCC for brain imaging data. SCCs calculated for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math> using Delaunay triangulations (fineness degree <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>).</p>
Figure 3
<p>Example of results for a two-sample approach comparing two sets of images: one formed by control patients and another by pathological (AD) patients. Blue indicates detected hypo-activity while orange indicates hyper-activity. Delaunay triangulations’ fineness degree <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>. (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>. (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>.</p>
Figure 4
<p>Computing times for Delaunay triangulations for complex neuroimaging data structures with growing fineness degree values. Curve fitted with local (LOESS) regression.</p>
Figure 5
<p>Computing times for one-group mean function and SCC estimation for neuroimaging data with growing value of triangulation fineness degree. Curve fitted using local (LOESS) regression.</p>
Figure 6
<p>Computing times for two-group mean function and SCC estimation for the differences between groups with growing value of triangulation fineness degree. Curve fitted using local (LOESS) regression.</p>
14 pages, 798 KiB  
Article
Can We Trust Edge Computing Simulations? An Experimental Assessment
by Gonçalo Carvalho, Filipe Magalhães, Bruno Cabral, Vasco Pereira and Jorge Bernardino
Computers 2022, 11(6), 90; https://doi.org/10.3390/computers11060090 - 31 May 2022
Cited by 1 | Viewed by 2230
Abstract
Simulators allow for the simulation of real-world environments that would otherwise be financially costly and difficult to implement at a technical level. Thus, a simulation environment facilitates the implementation and development of use cases, rendering such development cost-effective and faster, and it can be used in several scenarios. There are some works about simulation environments in Edge Computing (EC), but few studies assess the validity of these simulators. This paper compares the execution of the EdgeBench benchmark in a real-world environment and in a simulation environment using FogComputingSim, an EC simulator. Overall, the simulated environment was 0.2% faster than the real world, thus allowing us to state that we can trust EC simulations and to conclude that it is possible to implement and validate proofs of concept with FogComputingSim. Full article
(This article belongs to the Special Issue Edge Computing for the IoT)
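The real-versus-simulated comparison described above reduces to relative deviation between matched measurements. A minimal sketch, with illustrative numbers rather than the paper's measurements:

```python
# Minimal sketch of a real-vs-simulated comparison: given matched
# per-task execution times from a testbed and from a simulator,
# report the simulator's mean relative deviation.
# The times below are hypothetical, not the paper's data.
real_ms = [118.0, 242.5, 97.3, 305.1]  # hypothetical real-world times
sim_ms = [117.6, 243.0, 97.1, 304.2]   # hypothetical simulated times

deviations = [(s - r) / r for r, s in zip(real_ms, sim_ms)]
mean_dev = sum(deviations) / len(deviations)
print(f"simulation deviates from the real world by {mean_dev:+.2%} on average")
```

A negative mean deviation means the simulation ran faster than the real testbed, which is the direction of the 0.2% difference the abstract reports.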
Show Figures

Graphical abstract
Figure 1
<p>EdgeBench environmental setup.</p>
Figure 2
<p>Benchmark pipeline.</p>
Figure 3
<p>EdgeBench metrics (AWS example).</p>
Figure 4
<p>Average values for simulated latency (ms) and bandwidth (Mbps).</p>
Figure 5
<p>Distributed data flow of applications.</p>
Figure 6
<p>Scalar application in the edge (time in ms).</p>
Figure 7
<p>Scalar application in the cloud (time in ms).</p>
Figure 8
<p>Image application in the edge (time in ms).</p>
Figure 9
<p>Image application in the cloud (time in ms).</p>
Figure 10
<p>Audio application in the edge (time in ms).</p>
Figure 11
<p>Audio application in the cloud (time in ms).</p>
26 pages, 3039 KiB  
Article
Release Planning Patterns for the Automotive Domain
by Kristina Marner, Stefan Wagner and Guenther Ruhe
Computers 2022, 11(6), 89; https://doi.org/10.3390/computers11060089 - 30 May 2022
Cited by 3 | Viewed by 3436
Abstract
Context: Today’s vehicle development is focusing more and more on handling the vast amount of software and hardware inside the vehicle. The resulting planning and development of the software especially confronts original equipment manufacturers (OEMs) with major challenges that have to be mastered. This makes effective and efficient release planning that provides the development scope in the required quality even more important. In addition, the OEMs have to deal with boundary conditions given by the OEM itself and the standards as well as legislation the software and hardware have to conform to. Release planning is a key activity for successfully developing vehicles. Objective: The aim of this work is to introduce release planning patterns to simplify the release planning of software and hardware installed in a vehicle. Method: We followed a pattern identification process that was conducted at Dr. Ing. h. c. F. Porsche AG. Results: We introduce eight release planning patterns, which both address the fixed boundary conditions and structure the actual planning content of a release plan. The patterns address an automotive context and have been developed from a hardware and software point of view based on two examples from the case company. Conclusions: The presented patterns address recurring problems in an automotive context and are based on real life examples. The gathered knowledge can be used for further application in practice and related domains. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
Show Figures

Figure 1
<p>Overview of the release planning patterns.</p>
Figure 2
<p>Activity diagram for pattern PROJECT SPECIFIC MILESTONES.</p>
Figure 3
<p>Example of three different project-specific milestones and how they can be placed in the timeline.</p>
Figure 4
<p>Activity diagram for pattern VALIDATION MILESTONES.</p>
Figure 5
<p>Example of three different validation milestones and how they can be placed in the timeline.</p>
Figure 6
<p>Activity diagram for pattern DELIVERY DATES.</p>
Figure 7
<p>Example of delivery dates and how they can be placed in the timeline.</p>
Figure 8
<p>Activity diagram for pattern HARDWARE STRUCTURE.</p>
Figure 9
<p>Example of a breakdown of the hardware structure into planning objects.</p>
Figure 10
<p>Activity diagram for pattern BASIS SOFTWARE STRUCTURE.</p>
Figure 11
<p>Example of a breakdown of the basis software structure into planning objects.</p>
Figure 12
<p>Activity diagram for pattern SOFTWARE COMPONENTS.</p>
Figure 13
<p>Example of a breakdown of the software component into planning objects.</p>
Figure 14
<p>Activity diagram for pattern SOFTWARE COMPONENT STRUCTURE.</p>
Figure 15
<p>Example of a breakdown of the software component structure into planning objects.</p>
Figure 16
<p>Activity diagram for pattern PARTNER FUNCTION STRUCTURE.</p>
Figure 17
<p>Example of a breakdown of the partner function structure into planning objects.</p>
Figure 18
<p>Visualisation of the presented patterns in an initial release plan.</p>
20 pages, 2062 KiB  
Article
Predicting the Category and the Length of Punishment in Indonesian Courts Based on Previous Court Decision Documents
by Eka Qadri Nuranti, Evi Yulianti and Husna Sarirah Husin
Computers 2022, 11(6), 88; https://doi.org/10.3390/computers11060088 - 30 May 2022
Cited by 6 | Viewed by 3395
Abstract
Among the sources of legal considerations are judges’ previous decisions regarding similar cases that are archived in court decision documents. However, due to the increasing number of court decision documents, it is difficult to find relevant information, such as the category and the length of punishment for similar legal cases. This study presents predictions of first-level judicial decisions by utilizing a collection of Indonesian court decision documents. We propose using multi-level learning, namely, CNN+attention, using decision document sections as features to predict the category and the length of punishment in Indonesian courts. Our results demonstrate that the decision document sections that strongly affected the accuracy of the prediction model were prosecution history, facts, legal facts, and legal considerations. The prediction of the punishment category shows that the CNN+attention model achieved better accuracy than other deep learning models, such as CNN, LSTM, BiLSTM, LSTM+attention, and BiLSTM+attention, by up to 28.18%. The superiority of the CNN+attention model is also shown to predict the punishment length, with the best result being achieved using the ‘year’ time unit. Full article
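The attention step that distinguishes the CNN+attention model described above can be sketched in isolation: after convolution, each position of the feature map receives a score, the scores are softmax-normalised, and the document vector is the weighted sum. The weights below are random placeholders, not a trained model.

```python
# Sketch of attention pooling over CNN-style features: score each
# position, softmax-normalise, and take the weighted sum as the
# document representation. All weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                   # T positions after convolution, d channels
H = rng.normal(size=(T, d))   # hypothetical CNN feature map
w = rng.normal(size=d)        # attention scoring vector (assumed form)

scores = H @ w                # one relevance score per position
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()          # softmax over positions
doc_vec = alpha @ H           # attention-weighted document vector

print("attention weights:", np.round(alpha, 3))
```

In the paper's multi-level setting, such pooling would be applied per document section before the section representations are combined; this sketch shows only the single-level mechanism.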
Show Figures

Graphical abstract
Figure 1
<p>An illustration of the CNN+attention method (FM stands for Feature Model, and DM stands for Document Model).</p>
Figure 2
<p>The distribution of decision documents in our data set was based on the length of punishment (in ‘day’ units).</p>
Figure 3
<p>The results for the ablation analysis of the document section features. The blue color denotes that a corresponding feature specified in the table’s row is used by a model specified in the table’s column. On the contrary, the white color denotes that a corresponding feature specified in the table’s row is not used by a model specified in the table’s column. The model that achieves the highest accuracy is printed in boldface.</p>
Figure 4
<p>The confusion matrix for the prediction results generated using the CNN+attention model.</p>
Figure 5
<p>The prediction results for the length of punishment using day, month, and year time units. (<b>a</b>) Scatterplot of predicted punishment length using day time unit, (<b>b</b>) Scatterplot of predicted punishment length using month time unit, and (<b>c</b>) predicted punishment length using year time unit.</p>
17 pages, 923 KiB  
Article
The Potential of AR Solutions for Behavioral Learning: A Scoping Review
by Crispino Tosto, Farzin Matin, Luciano Seta, Giuseppe Chiazzese, Antonella Chifari, Marco Arrigo, Davide Taibi, Mariella Farella and Eleni Mangina
Computers 2022, 11(6), 87; https://doi.org/10.3390/computers11060087 - 30 May 2022
Cited by 6 | Viewed by 4130
Abstract
In recent years, educational researchers and practitioners have become increasingly interested in new technologies for teaching and learning, including augmented reality (AR). The literature has already highlighted the benefit of AR in enhancing learners’ outcomes in natural sciences, with a limited number of studies exploring the support of AR in social sciences. Specifically, there have been a number of systematic and scoping reviews in the AR field, but no peer-reviewed review studies on the contribution of AR within interventions aimed at teaching or training behavioral skills have been published to date. In addition, most AR research focuses on technological or development issues. However, limited studies have explored how technology affects social experiences and, in particular, the impact of using AR on social behavior. To address these research gaps, a scoping review was conducted to identify and analyze studies on the use of AR within interventions to teach behavioral skills. These studies were conducted across several intervention settings. In addition to this research question, the review reports an investigation of the literature regarding the impact of AR technology on social behavior. The state of the art of AR solutions designed for interventions in behavioral teaching and learning is presented, with an emphasis on educational and clinical settings. Moreover, some relevant dimensions of the impact of AR on social behavior are discussed in more detail. Limitations of the reviewed AR solutions and implications for future research and development efforts are finally discussed. Full article
Show Figures

Graphical abstract
Figure 1
<p>PRISMA protocol for articles relevant to the first research question.</p>
Figure 2
<p>PRISMA protocol for articles relevant to the second research question.</p>
Figure 3
<p>Number of papers per year and setting of intervention.</p>
21 pages, 5901 KiB  
Article
Energy-Efficient Deterministic Approach for Coverage Hole Detection in Wireless Underground Sensor Network: Mathematical Model and Simulation
by Priyanka Sharma and Rishi Pal Singh
Computers 2022, 11(6), 86; https://doi.org/10.3390/computers11060086 - 26 May 2022
Cited by 3 | Viewed by 2259
Abstract
Wireless underground sensor networks (WUSNs) are being used in agricultural applications, in border patrol, and in the monitoring of remote areas. Coverage holes in WUSNs are an issue which needs to be dealt with. Coverage holes may occur due to random deployment of nodes as well as the failure of nodes with time. In this paper, a mathematical approach for hole detection using Delaunay geometry is proposed which divides the network region into Delaunay triangles and applies the laws of triangles to identify coverage holes. WUSNs comprise static nodes, and replacing underground nodes is a complex task. A simple algorithm for detecting coverage holes in static WSNs/WUSNs is proposed. The algorithm was simulated in the region of interest for the initially randomly deployed network and after energy depletion of the nodes with time. The performance of the algorithm was evaluated by varying the number of nodes and the sensing radius of the nodes. Our scheme is advantageous over other schemes in the following aspects: (1) it builds a mathematical model and a polynomial-time algorithm for detecting holes; (2) it does not rely on centralized computation and therefore provides better scalability; (3) it is energy-efficient; and (4) it provides a cost-effective solution to detect holes with great accuracy and a low detection time. The algorithm takes less than 0.1 milliseconds to detect holes in a 100 m × 100 m-size network with 100 sensor nodes having a sensing radius of 8 m. The detection time shows only a linear change with an increase in the number of nodes in the network, which makes the algorithm applicable for every network size from small to large. Full article
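The core triangle test described above can be sketched directly: triangulate the deployed nodes and flag a potential hole wherever a Delaunay triangle's circumradius R exceeds the sensing radius Rs. The deployment below is random and illustrative, and the sketch applies only the basic criterion; the paper additionally refines the test for obtuse triangles, which is omitted here.

```python
# Sketch of Delaunay-based coverage-hole detection: a triangle whose
# circumradius R exceeds the sensing radius Rs may contain a point not
# covered by any of its three vertices. Illustrative deployment only.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 100.0, size=(100, 2))  # 100 m x 100 m field
Rs = 8.0                                        # sensing radius in metres

def circumradius(a, b, c):
    # R = (|AB| |BC| |CA|) / (4 * area)
    ab = np.linalg.norm(b - a)
    bc = np.linalg.norm(c - b)
    ca = np.linalg.norm(a - c)
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return ab * bc * ca / (4.0 * area) if area > 0 else float("inf")

tri = Delaunay(nodes)
holes = [s for s in tri.simplices if circumradius(*nodes[s]) > Rs]
print(f"{len(holes)} of {len(tri.simplices)} triangles flag a possible hole")
```

Because each triangle is tested independently, the check is naturally decentralised, which matches the scalability claim in the abstract.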
Show Figures

Figure 1
<p>WUSN with a size of 100 m × 100 m with 100 randomly deployed sensor nodes. The network plot was created using MATLAB software.</p>
Figure 2
<p>Delaunay triangulation of sensor nodes deployed in the target region. The Delaunay triangles of the deployed sensor nodes were generated using MATLAB software.</p>
Figure 3
<p>Circumcircle around a triangle using ⊥ bisectors: (<b>a</b>) acute triangle; (<b>b</b>) obtuse triangle; (<b>c</b>) right-angle triangle.</p>
Figure 4
<p>There is no coverage hole when Rs &gt; R: (<b>a</b>) acute triangle; (<b>b</b>) obtuse triangle; (<b>c</b>) right-angle triangle.</p>
Figure 5
<p>Checking if a triangle is obtuse or not, with three cases: (<b>a</b>–<b>c</b>).</p>
Figure 6
<p>There is a coverage hole when Rs &lt; R: (<b>a</b>) acute triangle; (<b>b</b>) right-angle triangle.</p>
Figure 7
<p>Estimation of a coverage hole in an obtuse triangle when Rs &lt; R: (<b>a</b>) no hole; (<b>b</b>) hole.</p>
Figure 8
<p>Obtuse triangle with ⊥ bisectors GE and HF.</p>
Figure 9
<p>Coverage hole estimation in an obtuse triangle: (<b>a</b>) no hole when E and F are covered; (<b>b</b>) hole when E and F are not covered.</p>
Figure 10
<p>Obtuse triangle with crucial points and sides marked.</p>
Figure 11
<p>Hole detection in a randomly deployed network. Red circles show hole centers marked by the algorithm.</p>
Figure 12
<p>Increasing holes in the network with time (due to node failure). Failing nodes and increasing holes are shown from (<b>a</b>) to (<b>b</b>) to capture real-time scenarios.</p>
Figure 13
<p>Number of rounds vs. holes in the network. The figure depicts no change in the holes for the first 175 rounds, a sharp increase in the number of holes when nodes start failing, and then very little change as communication decreases due to the failure of nodes within range of each other.</p>
Figure 14
<p>Number of detections vs. detection time. The detection time is less than 0.1 millisecond and decreases with the decrease in the number of nodes (nodes failing with time) in the network.</p>
Figure 15
<p>Number of rounds vs. residual energy. Residual energy is measured after every fifty rounds.</p>
Figure 16
<p>Number of nodes vs. detection time. The hole detection time shows a linear increase with an increase in the number of nodes, proving the scalability of the algorithm.</p>
18 pages, 837 KiB  
Article
Improved Bidirectional GAN-Based Approach for Network Intrusion Detection Using One-Class Classifier
by Wen Xu, Julian Jang-Jaccard, Tong Liu, Fariza Sabrina and Jin Kwak
Computers 2022, 11(6), 85; https://doi.org/10.3390/computers11060085 - 26 May 2022
Cited by 20 | Viewed by 4078
Abstract
Existing generative adversarial networks (GANs), primarily used for creating fake image samples from natural images, demand a strong dependence (i.e., the training strategies of the generators and the discriminators need to be in sync) for the generators to produce fake samples realistic enough to “fool” the discriminators. We argue that this strong dependency required for GAN training on images does not necessarily work for GAN models for network intrusion detection tasks. This is because the network intrusion inputs have a simpler feature structure such as relatively low dimension, discrete feature values, and smaller input size compared to the existing GAN-based anomaly detection tasks proposed on images. To address this issue, we propose a new Bidirectional GAN (Bi-GAN) model that is better equipped for network intrusion detection with reduced overheads involved in excessive training. In our proposed method, the training iteration of the generator (and accordingly the encoder) is increased separately from the training of the discriminator until it satisfies the condition associated with the cross-entropy loss. Our empirical results show that this proposed training strategy greatly improves the performance of both the generator and the discriminator even in the presence of imbalanced classes. In addition, our model offers a new construct of a one-class classifier using the trained encoder–discriminator. The one-class classifier detects anomalous network traffic based on binary classification results instead of calculating expensive and complex anomaly scores (or thresholds). Our experimental results illustrate that our proposed method is highly effective for network intrusion detection tasks and outperforms other similar generative methods on two datasets: NSL-KDD and CIC-DDoS2019. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
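The one-class decision described above can be sketched in a few lines: rather than computing an anomaly score against a tuned threshold, the trained encoder-discriminator pair emits a probability and each sample is labelled by plain binary classification. The "discriminator" below is a random placeholder, not the paper's trained network, and the input dimensions are assumptions.

```python
# Sketch of a one-class decision from a discriminator output: traffic is
# labelled anomalous when P(normal) < 0.5, with no separate anomaly
# score or threshold search. The discriminator is a random stand-in.
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(size=16)               # placeholder discriminator weights

def discriminator(pairs):
    # placeholder: sigmoid over a linear projection of [x, E(x)] pairs
    return 1.0 / (1.0 + np.exp(-(pairs @ w)))

batch = rng.normal(size=(5, 16))      # concatenated [x, E(x)] inputs (assumed dims)
p_normal = discriminator(batch)       # P(sample is normal traffic)
labels = (p_normal < 0.5).astype(int) # 1 = anomalous, 0 = normal
print("anomaly labels:", labels)
```

The point of the construct is that the 0.5 cut-off comes for free from binary classification, avoiding the per-dataset threshold calibration that score-based detectors need.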
Show Figures

Figure 1
<p>Structure of GAN. The generator G maps the input <span class="html-italic">z</span> (i.e., random noise) in latent space to produce a high dimensional <math display="inline"><semantics> <mrow> <mi>G</mi> <mo>(</mo> <mi>z</mi> <mo>)</mo> </mrow> </semantics></math> (i.e., fake samples). The discriminator D is expected to separate <span class="html-italic">x</span> (i.e., real samples) from <math display="inline"><semantics> <mrow> <mi>G</mi> <mo>(</mo> <mi>z</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Figure 2
<p>Structure of BiGAN. Note that (<span class="html-italic">z</span> and <math display="inline"><semantics> <mrow> <mi>E</mi> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> </semantics></math>) and (<math display="inline"><semantics> <mrow> <mi>G</mi> <mo>(</mo> <mi>z</mi> <mo>)</mo> </mrow> </semantics></math> and <span class="html-italic">x</span>) have the same dimensions. The concatenated pairs <math display="inline"><semantics> <mrow> <mo>[</mo> <mi>G</mi> <mo>(</mo> <mi>z</mi> <mo>)</mo> <mo>,</mo> <mi>z</mi> <mo>]</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>[</mo> <mi>x</mi> <mo>,</mo> <mi>E</mi> <mo>(</mo> <mi>x</mi> <mo>)</mo> <mo>]</mo> </mrow> </semantics></math> are the two input sources of the <span class="html-italic">discriminator</span> D. The <span class="html-italic">Generator</span> G and the <span class="html-italic">encoder</span> E are optimized with the loss generated by the <span class="html-italic">discriminator</span> D.</p>
Figure 3
<p>Flowchart of our proposed approach.</p>
Figure 4
<p>BiGAN data flow. <span class="html-italic">Encoder</span>: input dimensions (122), output dimensions (10); <span class="html-italic">Generator</span>: input dimensions (10), output dimensions (122); <span class="html-italic">Discriminator</span> concatenates the input and output of <span class="html-italic">Encoder</span> or <span class="html-italic">Generator</span> to form the input and functions as a binary classifier.</p>
Figure 5
<p>Training losses vs. iterations. Eloss, Gloss, and Dloss represent the training loss trend of encoder, generator, and discriminator, respectively.</p>
Figure 6
<p>The PCA visualization of data distribution in (<b>a</b>) KDDTrain+ and (<b>b</b>) KDDTest+ dataset.</p>
Figure 7
<p>The PCA visualization of concatenated outputs of the encoder and the generator after training on (<b>a</b>) NSL-KDD and (<b>b</b>) CIC-DDoS2019.</p>
Figure 8
<p>Confusion matrix result of (<b>a</b>) NSL-KDD and (<b>b</b>) CIC-DDoS2019.</p>
Figure 9
<p>AUC_ROC curve of our proposed model on (<b>a</b>) NSL-KDD and (<b>b</b>) CIC-DDoS2019.</p>
11 pages, 8125 KiB  
Article
Improving Multi-View Camera Calibration Using Precise Location of Sphere Center Projection
by Alberto J. Perez, Javier Perez-Soler, Juan-Carlos Perez-Cortes and Jose-Luis Guardiola
Computers 2022, 11(6), 84; https://doi.org/10.3390/computers11060084 - 24 May 2022
Cited by 1 | Viewed by 2507
Abstract
Several calibration algorithms use spheres as calibration tokens because of the simplicity and uniform shape that a sphere presents across multiple views, along with the simplicity of its construction. Among the alternatives are the use of complex 3D tokens with reference marks, which are usually complex to build and analyze with the required accuracy, or the search for common features in scene images, a task that is also highly complex due to perspective changes. Some of the algorithms using spheres rely on the estimation of the sphere center projection obtained from the camera images to proceed. The computation of these projection points from the sphere silhouette on the images is not straightforward because the projection does not match exactly the silhouette centroid. Thus, several methods have been developed to cope with this calculation. In this work, a simple and fast numerical method adapted to precisely compute the sphere center projection for these algorithms is presented. The benefits over other similar existing methods are its ease of implementation and its lower sensitivity to segmentation issues. Other possible applications of the proposed method are also presented. Full article
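The mismatch between silhouette centroid and true center projection described above can be illustrated with a 2D analogue of the pinhole geometry: a sphere of radius r at distance `dist`, seen at angle theta off the optical axis, projects to a silhouette spanning angles theta ± beta with sin(beta) = r / dist; the silhouette center is the midpoint of the two tangent projections and differs slightly from F·tan(theta). All numbers below are illustrative, not the paper's setup.

```python
# 2D sketch of why the silhouette centre C' differs from the true
# projection C of the sphere centre under pinhole projection.
import math

F = 25.0                    # focal length, mm (illustrative)
dist = 550.0                # camera-to-sphere-centre distance, mm
r = 43.5 / 2.0              # sphere radius, mm
theta = math.radians(10.0)  # off-axis angle of the sphere centre

beta = math.asin(r / dist)      # half-angle subtended by the sphere
a = F * math.tan(theta + beta)  # image of one silhouette extreme
b = F * math.tan(theta - beta)  # image of the other extreme

C_prime = (a + b) / 2.0         # silhouette (ellipse) centre
C = F * math.tan(theta)         # true projection of the sphere centre
print(f"C'={C_prime:.4f} mm, C={C:.4f} mm, offset={C_prime - C:.4f} mm")
```

Because tan is convex, the offset is always away from the principal point and grows with the off-axis angle, which is why a correction such as the paper's iterative method is needed for precise calibration.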
Show Figures

Figure 1
<p>A sphere of center <math display="inline"><semantics> <msub> <mi>C</mi> <mi>s</mi> </msub> </semantics></math> is projected on the camera plane as an ellipse. The projection of its center, <span class="html-italic">C</span>, does not lie exactly on the ellipse center, <math display="inline"><semantics> <msup> <mi>C</mi> <mo>′</mo> </msup> </semantics></math>. <span class="html-italic">F</span> represents the camera focal length, <math display="inline"><semantics> <msub> <mi>O</mi> <mi>c</mi> </msub> </semantics></math> is the camera position and <math display="inline"><semantics> <mover accent="true"> <msub> <mi>u</mi> <mi>c</mi> </msub> <mo>→</mo> </mover> </semantics></math> is the camera orientation. <span class="html-italic">O</span>, principal point of camera image.</p>
Figure 2
<p>Point <math display="inline"><semantics> <msup> <mi>C</mi> <mo>′</mo> </msup> </semantics></math> can be computed as the mean of points <span class="html-italic">a</span> and <span class="html-italic">b</span>.</p>
Figure 3
<p>Triangles <math display="inline"><semantics> <mrow> <msub> <mi>O</mi> <mi>c</mi> </msub> <msub> <mi>C</mi> <mi>s</mi> </msub> <mi>P</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>Q</mi> <msub> <mi>C</mi> <mi>s</mi> </msub> <mi>P</mi> </mrow> </semantics></math> and triangles <math display="inline"><semantics> <mrow> <msub> <mi>O</mi> <mi>c</mi> </msub> <mi>Q</mi> <mi>P</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>Q</mi> <msub> <mi>C</mi> <mi>s</mi> </msub> <mi>P</mi> </mrow> </semantics></math> are similar.</p>
Figure 4
<p>Iterative process for computing <span class="html-italic">C</span>. In every iteration (from left to right), a new estimation of <math display="inline"><semantics> <msubsup> <mi>α</mi> <mi>i</mi> <mo>′</mo> </msubsup> </semantics></math> and <math display="inline"><semantics> <msub> <mi>C</mi> <mi>i</mi> </msub> </semantics></math> is computed. The unitary vector <math display="inline"><semantics> <mover accent="true"> <msub> <mi>u</mi> <mi>e</mi> </msub> <mo>→</mo> </mover> </semantics></math> defines the line containing <math display="inline"><semantics> <msup> <mi>C</mi> <mo>′</mo> </msup> </semantics></math>, <span class="html-italic">C</span> and <span class="html-italic">O</span>.</p>
Figure 5
<p>ZG3D capture device. Camera distribution (<b>left</b>). Actual device (<b>right</b>). The objects are captured as they fall through the device.</p>
Figure 6
<p>The calibration token consisted of two spheres of different diameters connected by a narrow cylinder (<b>left</b>). The center projections of both spheres were located on the images (<b>right</b>).</p>
Figure 7
<p>An example of capture. On each capture, 16 images were obtained in the device.</p>
Figure 8
<p>Distribution error of sphere center calculations for both calibration token spheres. The graph shows the estimation error before and after applying the proposed sphere center correction algorithm with the context parameters.</p>
Figure 9
<p>The error between the blob center and the sphere center projection with data: <math display="inline"><semantics> <mrow> <mi>W</mi> <mo>=</mo> <mn>550</mn> </mrow> </semantics></math> mm, <math display="inline"><semantics> <mrow> <mi>F</mi> <mo>=</mo> <mn>25</mn> </mrow> </semantics></math> mm, sphere diameter <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>26.1</mn> </mrow> </semantics></math> mm (<b>top</b>) and <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>43.5</mn> </mrow> </semantics></math> mm (<b>bottom</b>) on an image (<math display="inline"><semantics> <mrow> <mn>2448</mn> <mo>×</mo> <mn>2048</mn> </mrow> </semantics></math>, pixel size = <math display="inline"><semantics> <mrow> <mn>3.45</mn> <mspace width="3.33333pt"/> <mi mathvariant="sans-serif">μ</mi> </mrow> </semantics></math>m). The bottom of the curve represents a null error.</p>
Figure 10
<p>Maximal error on the image (<math display="inline"><semantics> <mrow> <mn>2448</mn> <mo>×</mo> <mn>2048</mn> </mrow> </semantics></math> pixels, <math display="inline"><semantics> <mrow> <mn>3.45</mn> <mspace width="3.33333pt"/> <mi mathvariant="sans-serif">μ</mi> </mrow> </semantics></math>m pixel size) for different values of the work distance (<span class="html-italic">W</span>), focal length (<span class="html-italic">F</span>) and the sphere diameter (<span class="html-italic">D</span>).</p>
Figure 11
<p>Each <math display="inline"><semantics> <msubsup> <mi>C</mi> <mi>i</mi> <mo>′</mo> </msubsup> </semantics></math> point is the computed center projection of a target sphere captured with camera <span class="html-italic">i</span>; an epipolar line <math display="inline"><semantics> <msub> <mi>e</mi> <mi>i</mi> </msub> </semantics></math> can be obtained for each point–camera pair. With the epipolar lines, the point <math display="inline"><semantics> <msup> <mi>p</mi> <mo>′</mo> </msup> </semantics></math> (the estimated sphere center in the 3D space) can be obtained by triangulation. A more precise estimation of point <math display="inline"><semantics> <msubsup> <mi>C</mi> <mi>i</mi> <mo>′</mo> </msubsup> </semantics></math> implies a more precise <math display="inline"><semantics> <msup> <mi>p</mi> <mo>′</mo> </msup> </semantics></math> estimation.</p>