Article

Machine Learning Approach in Heterogeneous Group of Algorithms for Transport Safety-Critical System

Jaehyung An 1, Alexey Mikhaylov 2 and Keunwoo Kim 3
1 College of Business, Hankuk University of Foreign Studies, Seoul 02450, Korea
2 Financial University under the Government of the Russian Federation, 124167 Moscow, Russia
3 Solbridge International School of Business, Daejeon 34613, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(8), 2670; https://doi.org/10.3390/app10082670
Submission received: 27 February 2020 / Revised: 27 March 2020 / Accepted: 5 April 2020 / Published: 13 April 2020
(This article belongs to the Section Mechanical Engineering)

Abstract

This article presents a machine learning approach for a heterogeneous group of algorithms in a transport-type model for the optimal distribution of tasks in safety-critical systems (SCS). Applied systems operating in the working area require the determination of their parameters. Accordingly, machine learning models are trained on various subsets of our transformed data, and the bounds of 90 percent tolerance intervals are repeatedly calculated, each time noting whether or not they contain the actual value of X. The approach considers the features of algorithms for solving such important classes of management problems as the allocation of limited resources in multi-agent SCS, along with their most important properties. The measurement error was modeled as normally distributed. The results, together with the situations requiring decisions, are recorded, and a sample is formed from the observations. This paper develops the literature on the machine learning approach into new research implications. The empirical research shows the effect of the optimal algorithm for transport safety-critical systems.

1. Introduction

An example of a transport-model distribution of tasks in a heterogeneous group of algorithms is one that operates without operator participation; this is not what we propose. It is assumed that the model is trained on test-range emergency scenarios in which the algorithms must perform the corresponding operations. Training can also be carried out during the regular operation of the algorithms.
This paper summarizes the literature on the machine learning approach in safety-critical systems and develops it into new research implications. Empirical research shows the effect of the machine learning approach on the efficiency of safety-critical systems. This research makes at least two important contributions to the body of knowledge. The first contribution is the method of the machine learning approach in the heterogeneous group of algorithms, particularly with regard to long-term forecasts. The second contribution is that the paper identifies trends in the development of transport safety-critical systems.
We will assume that a person participates, directly or indirectly, in the work of the safety-critical system (SCS) and is actually a part of it. In most applications, the ideal SCS is one in which an individual algorithm or group of algorithms performs all assigned tasks autonomously. However, this ideal will not always be achieved, because of the peculiarities of the problem domain, the uncertainty of the environment and the limited resources of the SCS.
In the modern literature, research on machine learning algorithms in safety-critical systems covers neural networks, ensemble learning and lazy learning. These works study ensemble methods for solving problems in safety-critical systems using heterogeneous ensembles.
This research makes at least three points of novelty. The first is the method of the machine learning approach in the heterogeneous group of algorithms, particularly with regard to long-term forecasts. The second is achieved through empirical testing: determining a mediating role shows that the efficiency of safety-critical systems is currently not high, which remains a matter of debate among previous researchers. The third is that our research identifies trends in the development of transport safety-critical systems.
Systems whose failure or malfunction can cause serious injury or damage to people, equipment or the environment are defined as safety-critical systems. It is crucial that these systems function properly, as they are among the most important systems for safe work in transport [1]. However, the control and safety-critical functions in transportation need to evolve to keep up with the increase in voltage [2].
The braking moment is not well defined in existing studies. However, failures in brake pressure measurements can be caused by software or hardware problems, which leads to critical vehicle safety problems [3].

2. Literature Review

The problem of the heterogeneous group of algorithms for transport safety-critical systems has been examined in the recent literature. Some researchers [4] have shown that an estimation algorithm based on an advanced Kalman filter can be used for fast online monitoring of hydraulic brake pressure. The estimation algorithm works by calculating the volume of fluid flowing through the valve. However, existing studies on the assessment of brake pressure have mainly focused on control technologies, and an approach with a probabilistic method, such as machine learning, has rarely been proposed [5].
For ensembles to be more accurate than any of their individual members, the base models must be as diverse as possible. In other words, the ensemble becomes more accurate as more information comes from the base classifiers [6,7]. Some ensemble methods do not include an integrator; those that do use one of three types of combiners [8].
The approach that uses a combiner is defined by the "stacked generalization" scheme with a metaclassifier. This poses an important problem for the combiner [9,10]. Changing the ensemble structure reduces the complexity of the problem, and solutions along these lines have been developed [11,12].
There are many SCS autonomy issues. This approach considers the features of algorithms for solving such important classes of management problems as the allocation of limited resources in multi-agent SCS (MASCS) and their most important properties [13,14,15,16,17,18]. The adopted context of MASCS operation is search-and-rescue and cleanup work at chemically hazardous facilities. In this case, MASCS will have supervisory control over the distribution [19,20]. The latter problem is solved by the control center. The purpose of using MASCS is to increase the level of personnel safety at chemically hazardous facilities [21,22,23].
The algorithmic system includes a control center, a team of algorithm agents with different specializations and on-board equipment (measuring and/or actuating). The manager of the center solves distribution tasks, forms management teams as a whole and processes the information received from the algorithms in the process of performing tasks [24,25,26,27].

3. Materials and Methods

The SCS models are as follows:
  • The SCS should effectively solve the distribution problem with some regular or random periodicity [28].
  • The external environment of the SCS creates situations that require decisions, which in this context means the need to assign tasks among algorithms. Under specific conditions, the initiator of the assignment may be the SCS itself [29]. An example is when the charge level of an energy source has reached a critical threshold [30].
  • The efficiency of the SCS cannot be specified a priori for the whole planned operating period by a single scalar indicator [31,32], and the entire required set of performance indicators cannot be identified and formalized a priori, i.e., at the stage of design, adjustment and preparation for the task [33,34].
  • The control environment may be non-stationary, i.e., in the process of completing a task, its restrictions might change, along with the composition of controlled variables and the target preferences of decision makers [35,36].
MASCS consists of algorithms, each of which can complete one or more tasks from its specified list of task types [37]. The sequence diagram of the scope of work assigned to MASCS is divided into individual tasks (Figure 1). Performing each task $i$ ($i = 1, \dots, m$) by an algorithm $j$ ($j = 1, \dots, n$) incurs generalized (integral) costs $C_{ij}$.
The costs are vector-like in nature. They may include the time of operation and the consumption of energy or any other resource. Their integration implies the possibility of some convolution into a single scalar indicator [31]. The tasks solved by algorithms in this context include, for example, moving from the current point to a specified point, executing an operation at a given point, the uniform distribution of algorithms in a certain zone and the movement of an object from one point to another [24].
These types of tasks are structurally represented as assignment problems in a number of research papers [25]. We consider the more general case of the transport problem, because the assignment problem is a special case of it [26].
Refinement of the elements of the payment matrix occurs when solving the inverse transport problem, whose algorithms implement an adaptation mechanism (feedback) that keeps the actual target function of the SCS adequate to the current target preferences of the decision maker. Thus, the sequence of the above four steps is an iterative procedure for deciding on incoming tasks. It remains only to perform the distribution of tasks by solving the transport problem.
We present the formulation of direct and inverse tasks [1] in the form:
$L(x) = \sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij} X_{ij}$ (1)
Decision-making situations are determined by a combination of two vectors, which satisfy the following constraints for the balanced task:
$\sum_{j=1}^{n} X_{ij} = A_i$ (2)
$\sum_{i=1}^{m} X_{ij} = B_j$ (3)
where $X_{ij} \ge 0$, $i = 1, \dots, m$, $j = 1, \dots, n$.
$X_{opt} = \arg\min_{x} L(x)$ (4)
Thus, Relations (1)–(4) constitute the statement of the direct task, implemented in Step 1 of the algorithm, the result of which is the minimum-cost plan for the distribution of tasks among algorithms. Here, we assume that all elements at the various planning points are measurable. The payment matrix is one of the elements of the task that requires clarification by solving the inverse task. For convenience, a number of transformations bring Problem (1)–(4) to a form that is more convenient for analysis and implementation. To do this, the basic variables are expressed through the free variables:
$X_{11} = A_1 - \sum_{j=2}^{n} B_j + \sum_{i=2}^{m} \sum_{j=2}^{n} X_{ij}$ (5)
$X_{i1} = A_i - \sum_{j=2}^{n} X_{ij}$ (6)
$i = 2, \dots, m$ (7)
$X_{1j} = B_j - \sum_{i=2}^{m} X_{ij}$ (8)
$j = 2, \dots, n$ (9)
As a result, a problem of smaller dimension is isolated, in which one does not need to look for the entire matrix but only its block (all the elements of the matrix X except the first row and the first column), which has the form of a task with inequality constraints:
$L(x) = \sum_{i=2}^{m} \sum_{j=2}^{n} R_{ij} X_{ij}$ (10)
$R_{ij} = C_{11} - C_{i1} - C_{1j} + C_{ij}$ (11)
$\sum_{j=2}^{n} B_j - A_1 - \sum_{i=2}^{m} \sum_{j=2}^{n} X_{ij} \le 0$ (12)
$\sum_{j=2}^{n} X_{ij} - A_i \le 0$ (13)
$i = 2, \dots, m$ (14)
$\sum_{i=2}^{m} X_{ij} - B_j \le 0$ (15)
$j = 2, \dots, n$ (16)
$X_{opt} = \arg\min_{X} L(X)$ (17)
Having solved Relations (10)–(17), we find the variables from which Formulas (5)–(9) should be calculated, which gives the complete result of the original task. To solve Relations (10)–(17), we can use any standard method, like [2,3]. Thus, the above formulation and transformation of the task make it possible to perform all operations of optimal distribution and ensure the adequacy of the payment matrix to the current preferences of the decision maker [4,5].
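To make the procedure concrete, the following is a minimal sketch (not the authors' implementation) of solving the balanced task (1)–(4) directly with a standard LP solver. The cost matrix reuses the simulated payments of Table 1, while the supplies A and demands B are hypothetical illustration values.

```python
# Minimal sketch: balanced transport problem (1)-(4) via scipy's LP solver.
# Costs C_ij are taken from Table 1; A and B are hypothetical.
import numpy as np
from scipy.optimize import linprog

C = np.array([[4.0, 2.0, 3.0],
              [1.0, 5.0, 4.0]])   # generalized costs C_ij (m tasks x n algorithms)
A = np.array([5.0, 5.0])          # supplies A_i
B = np.array([3.0, 3.0, 4.0])     # demands B_j (balanced: sum(A) == sum(B))
m, n = C.shape

# Equality constraints (2)-(3): row sums equal A_i, column sums equal B_j.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j X_ij = A_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0            # sum_i X_ij = B_j
b_eq = np.concatenate([A, B])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
X_opt = res.x.reshape(m, n)            # optimal distribution plan (4)
print(X_opt, res.fun)
```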
Any new observation allows new values of the estimates of the elements of the matrix R to be calculated as follows:
$R = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \sum_{t=1}^{K} B_t e_{ijt} \right)^{2} \right)^{-1} \sum_{t=1}^{K} B_t e_{ijt}$ (18)
where $e_{ijt}$ are the coordinates of the normalized observation vector in the vector (matrix) of estimates and $B_t$ are weights, calculated as the length of the observation vector before its normalization.
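Under our reading of the reconstructed Estimate (18), it amounts to a weighted sum of the normalized observation matrices, rescaled by the inverse of its squared norm. A minimal numpy sketch follows; the observation matrices here are randomly generated placeholders, not actual data.

```python
# Minimal sketch of Estimate (18), assuming it is a weighted sum of normalized
# observation matrices e_t with weights B_t, rescaled by its squared norm.
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 2, 3, 50
raw = rng.random((K, m, n)) + 1.0               # K raw observation matrices
B = np.linalg.norm(raw.reshape(K, -1), axis=1)  # weights: lengths before normalization
e = raw / B[:, None, None]                      # normalized observations e_ijt

S = np.tensordot(B, e, axes=1)                  # S_ij = sum_t B_t * e_ijt
R = S / np.sum(S ** 2)                          # Estimate (18)
print(R)
```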

4. Results

The decision maker of each cyclogram solves the distribution problem based on previous experience. However, by virtue of the subjective idea of the integral costs for each algorithm-task pair, the coefficients of the transport tables are measured with an error. The error was modeled as normally distributed. The results, together with the situations requiring decisions, are recorded, and a sample is formed from the observations. The simulated payment matrix is shown in Table 1.
Step-by-step estimates were obtained by solving the inverse problem at each observation step, along with the estimates of the elements of the payment matrix (Figure 2) and the normalized (reduced to unit length) magnitude of the difference between the vector of estimates and the actual vector of the elements of the model transport table (Figure 3).
Analyzing the convergence plot of the residual, it becomes clear that even an insignificant error by the decision maker in measurements of target preferences (elements of the transport tables) leads to fairly rapid machine learning of the transport model. The proposed machine learning algorithm thus leads to rapid convergence of the estimates. This is shown by the simulation studies of the generalized adaptation of the payment matrix of the transport model (transport table) to the real preferences of the decision maker.
We tested our approach experimentally on the Thomson Reuters dataset, which contains 851 records and nine numeric attributes, including the target one. We chose this smaller dataset because our data transformation (with a 15-fold repetition of experiments in order to make them more representative) took a long time to compute. We used a feedforward neural network with one hidden layer and a sigmoid activation function. Table 2 shows some prediction examples for this dataset as well as the corresponding threshold-crossing probabilities and 90 percent tolerance intervals.
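For illustration, a minimal sketch of such a network in scikit-learn follows. The synthetic data and the hidden-layer width are assumptions, since the dataset itself is not reproduced here; only the layer count, activation and data dimensions match the description above.

```python
# Minimal sketch: feedforward network with one hidden layer and a sigmoid
# (logistic) activation, on synthetic stand-in data (851 records, 8 features).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((851, 8))                             # 8 numeric attributes
y = X @ rng.random(8) * 40 + rng.normal(0, 1, 851)   # numeric target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), activation="logistic",
                     max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```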
The heterogeneous group of algorithms approach (stacking) is as follows:
Input: dataset D = {(x1, y1), (x2, y2), …, (xm, ym)};
       first-level learning algorithms L1, L2, …, LT;
       second-level learning algorithm L.
Process:
% Train a first-level individual learner ht by applying the first-level learning algorithm Lt to the original dataset D
for t = 1, …, T:
    ht = Lt(D)
end
% Generate a new dataset D0 = ∅ of meta-examples
for i = 1, …, m:
    for t = 1, …, T:
        zit = ht(xi)   % use ht to predict training example xi
    end
    D0 = D0 ∪ {((zi1, zi2, …, ziT), yi)}
end
% Train the second-level learner h0 by applying the second-level learning algorithm L to the new dataset D0
h0 = L(D0)
Output: H(x) = h0(h1(x), …, hT(x))
For K = |X|, it seems logical to perform the partition in such a way that each part has an equal number of objects and, for each fixed class, the share of objects in each of the K parts is the same. This method allows metaclassifier M to use the entire sample X for training. However, it has the following drawback: the obtained distributions of meta-feature values for the training set MF(X, A) and the validation sample MF(X0, A) often differ, because the meta-features for the validation sample are obtained by averaging K predictions. Obviously, this leads to a deterioration in the quality of the final prediction.
Three linear models participate in the stacking: Ridge, Lasso and SVR. This approach is shown in detail in Figure 4 and Figure 5.
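A minimal sketch of this stacking scheme with the three named first-level models follows. The choice of the final (second-level) estimator, the hyperparameters and the data are assumptions; the cv parameter produces the K-fold out-of-fold meta-features discussed above.

```python
# Minimal sketch: stacking with Ridge, Lasso and SVR as first-level learners.
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.random((300, 8))
y = X @ rng.random(8) + rng.normal(0, 0.1, 300)

stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("lasso", Lasso(alpha=0.01)),
                ("svr", SVR(kernel="linear"))],
    final_estimator=LinearRegression(),   # assumed second-level learner
    cv=5,  # K-fold out-of-fold predictions form the meta-features
)
stack.fit(X, y)
print(stack.predict(X[:3]))
```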
We performed this particular experiment only once because our goal was just to illustrate the process for a few specific target values. The table lists the final estimate X, calculated as an average of 20 "preliminary" estimates produced by the neural network, as well as the standard deviation sX of these preliminary estimates. Furthermore, the threshold-crossing probabilities Pr (X < 8.0) and Pr (X > 40.0) and the corresponding 90 percent tolerance interval are shown in the table. The real value of X in the last column (XREAL) is shown just for verification; it was hidden from the models. By comparing the individual rows in Table 2, we can see the effect of the average change of X on the threshold-crossing probabilities, which increase or decrease sharply as X enters or leaves each guarded region.
For example, as the average X changes from 36.9891 to 39.7402 between rows six and seven, the probability Pr (X > 40.0) increases more than tenfold from 0.033197 to 0.394970. Similarly, as X changes from 8.9423 to 10.4178 between rows two and three, Pr (X < 8.0) sharply decreases from 0.421507 to 0.032779. Another important fact can be seen in rows four and five—a surprisingly large influence of the standard deviation sX on the threshold-crossing probabilities. Both rows share a very similar value of X yet their threshold-crossing probabilities for both thresholds differ by several orders of magnitude. This is caused by the difference in the deviation sX, since the higher the sX, the higher the probability of exceeding the threshold.
In order to cross-check the soundness of our calculations, we substituted the bounds of our 90 percent tolerance intervals from the sixth column back into the threshold-crossing probability calculation, expecting to get the same result (0.9) for each row. The values that we obtained were indeed very close to 0.9: if we denote the error as Ɛ, our results were 0.9 + Ɛ, with the maximum absolute value of the error |ƐMAX| < 2.6 × 10^−13, which we interpret as confirmation of sufficient precision in our calculations. As can be seen from our derivations above, the task of predicting the probability of exceeding a predefined threshold is inverse to that of estimating the bounds of an interval within which the target value should reside with a given probability. Both exploit the same mathematical relationship between the bounds and the probability, but they approach it from opposite sides. This entitled us to verify the correctness of our technique by testing the validity of our 0.9 tolerance intervals, which is much easier than verifying the inverse task.
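The following sketch illustrates this duality under a plain normal model, using the values from row 2 of Table 2. The exact tolerance-interval machinery used in the paper may differ, which is why the probabilities below only approximate those in Table 2.

```python
# Minimal sketch of the interval/probability duality, assuming a normal model.
from scipy.stats import norm

X, sX = 8.9423, 4.5804                  # row 2 of Table 2
print(norm.cdf(8.0, loc=X, scale=sX))   # Pr(X < 8.0) ~ 0.418 (Table 2: 0.421507)
print(norm.sf(40.0, loc=X, scale=sX))   # Pr(X > 40.0)

lo, hi = norm.interval(0.90, loc=X, scale=sX)   # 90 percent interval
# Substituting the bounds back should recover 0.9, as in the cross-check above:
print(norm.cdf(hi, loc=X, scale=sX) - norm.cdf(lo, loc=X, scale=sX))
```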
Accordingly, we have trained machine learning models on various subsets of our transformed data and repeatedly calculated the bounds for 90 percent tolerance intervals, each time also noting whether or not they contained the actual value of X (which was known to us but hidden from the regression models, Table 3 and Table 4). Table 3 lists averaged values for model precision, elapsed calculation time and the success rate of interval estimates for each size of the training set (specified in the NumRec column). Each row in the table shows the values averaged over 15 independent experiment runs with different random seeds. The seeds governed the inclusion of the records in the training set as well as the random initialization of neural networks generating the models.
The average precision of our trained machine learning model is expressed through Correlation Coefficients (CorrCoef column) and Root Mean Squared Error (RMSE column). The Time column shows the average time in seconds needed for one modeling cycle, i.e., for training the model on NumRec randomly selected records in the training set and then predicting the target value for the remaining records in the dataset. The Intervals column shows the total number of interval estimates performed and column Correct the number of those that did encompass the real value XREAL.
The Ratio column then shows their success rate, defined as the ratio of correct intervals to the total number of intervals. Since these estimates were meant to represent 0.9 tolerance intervals, the values in the Ratio column should be no less than 0.90. We can see that this is indeed so for all the rows except the last two, where the small size of the training set (<100 records) negatively impacted the success rate as well as precision (lowering the correlation coefficients and increasing the root mean squared errors). Nevertheless, we consider the experimental evaluation a success because, for sufficiently representative training sets (>100 records), the success rate of interval estimates reached or crossed 0.9, as expected. In fact, for training sets with 120 records or more, the success rate consistently exceeded 0.9. We thus feel entitled to conclude that the proposed algorithm can be practically deployed in various experimental scenarios. In the future, we plan to test our approach on other datasets and conduct an in-depth analysis of the experimental results. We also intend to investigate the lower bound for the training set size below which the interval estimates fail to reach the expected success rate [2,3].
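A minimal sketch of such a coverage experiment follows. The data, the regression model and the plain-normal interval construction are all assumptions, standing in as a simplified version of the experimental pipeline described above.

```python
# Minimal sketch of the coverage experiment behind Table 3: for several random
# seeds, train on num_rec records, build a 0.9 interval for each held-out
# record and count how often it contains the true value.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.random((851, 8))
y = X @ rng.random(8) * 10 + rng.normal(0, 0.5, 851)

def coverage(num_rec, seeds=15, gamma=0.90):
    hits = total = 0
    for seed in range(seeds):
        idx = np.random.default_rng(seed).permutation(len(y))
        tr, te = idx[:num_rec], idx[num_rec:]
        model = Ridge().fit(X[tr], y[tr])
        resid_sd = np.std(y[tr] - model.predict(X[tr]), ddof=1)
        lo, hi = norm.interval(gamma, loc=model.predict(X[te]), scale=resid_sd)
        hits += np.sum((y[te] >= lo) & (y[te] <= hi))
        total += len(te)
    return hits / total

print(coverage(400))   # should be close to 0.90
```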

5. Discussion

We introduced a new algorithm that uses a special data transformation for critical system problems in this manuscript. The developed approach is compared with other machine learning methods, including a regression decision tree. In addition to R2 and RMSE, two other evaluation parameters, i.e., training time and testing speed, are used to evaluate the performance of the different models.
According to the results, heterogeneous groups of algorithms give much faster testing speeds compared to other algorithms. This can be achieved by the use of bootstrap sampling, in which exactly the same number of objects is selected as in the original sample. However, these objects are selected with repetitions; in other words, a selected random object is returned and can be selected again. The number of distinct objects selected will be approximately 63% of the original sample, and the remaining objects (approximately 37%) will never be included in the training sample. This generated bootstrap sample is used to train the basic algorithms (in our case, decision trees). This also happens randomly: random subsets (samples) of a given length are taken and trained on a selected random subset of features (attributes).
The remaining 37% of the sample is used to test the generalization ability of the constructed model. Then, all the trained trees are combined into a composition using simple voting, i.e., averaging over all samples. As a result of using bootstrap aggregating, the mean squared error decreases, as does the variance of the trained classifier: the error will not differ so much between different samples. As a result, the model, in the authors' view, will be less overfitted. The effectiveness of bagging lies in the fact that the basic algorithms (decision trees) are trained on various random subsamples, and their results may differ greatly, but their errors are mutually compensated during voting. The method is configured so that a composition can be built as quickly as possible from large data samples. Each tree is built in a specific way: the attribute for building a tree node is selected not from the total number of features but from a random subset of them. If we build a regression model, the number of features is n/3; in the case of classification, it is √n. This is an empirical recommendation and is called decorrelation: different sets of features fall into different trees, and the trees are trained on different samples.
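The 63%/37% split quoted above follows from drawing n objects with replacement: each object is left out with probability (1 − 1/n)^n ≈ e^−1 ≈ 0.368. A short numerical check:

```python
# Minimal sketch verifying the ~63% in-bag / ~37% out-of-bag bootstrap split.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10_000, 100
in_bag = np.mean([len(np.unique(rng.integers(0, n, n))) / n
                  for _ in range(trials)])
print(f"in-bag fraction:  {in_bag:.3f}")      # ~0.632
print(f"out-of-bag:       {1 - in_bag:.3f}")  # ~0.368
```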
From the point of view of real-time applications, regression tree solutions can be good candidates because of their simplicity and high performance efficiency. This confirms the ideas of researchers [14,18]. A novel method of evaluating heterogeneous groups of estimation algorithms was developed in this article, which can be used to estimate brake pressure. Real vehicle testing is carried out on a dynamometer bench [19,20].
Further work can be carried out in the following areas: the proposed algorithm can be refined using on-board testing, and intelligent braking control algorithms can be developed based on an estimate of the state of the road surface. The normalized error from the 15th step does not exceed 10 percent, as in previous research [1,22].

6. Conclusions

This paper summarizes the literature on the machine learning approach in safety-critical systems and develops it into new research implications. Empirical research shows the effect of the machine learning approach on the efficiency of safety-critical systems. This research makes at least two important contributions to the body of knowledge. The first contribution is the method of the machine learning approach in the heterogeneous group of algorithms, particularly with regard to long-term forecasts. The second contribution is that the paper identifies trends in the development of transport safety-critical systems.
This research has limitations. The first is that the approach was tested only in the heterogeneous group of algorithms for transport safety-critical systems; implementation in other industries may lead to different results. The second limitation is the source of data. We tested our approach experimentally on the Thomson Reuters dataset, which contains 851 records and nine numeric attributes, including the target one. We chose this smaller dataset because our data transformation (with a 15-fold repetition of experiments in order to make them more representative) took a long time to compute. We used a feedforward neural network with one hidden layer and a sigmoid activation function.

Author Contributions

Conceptualization, J.A.; software, A.M.; validation, A.M.; formal analysis, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The first author was supported by the Hankuk University of Foreign Studies Research Fund.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Arenas-Garcia, J.; Meng, A.; Petersen, K.B.; Lehn-Schioler, T.; Hansen, L.K.; Larsen, J. Unveiling Music Structure via PLSA Similarity Fusion. In Proceedings of the 2006 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing, Maynooth, Ireland, 6–8 September 2006; pp. 419–424. [Google Scholar]
  2. Asuncion, A.; Newman, D. UCI Machine Learning Repository; University of California, School of Information and Computer Science: Irvine, CA, USA, 2007. [Google Scholar]
  3. Baumgartner, D.; Serpen, G. Large experiment and evaluation tool for WEKA classifiers. In Proceedings of the 2009 International Conference on Data Mining, DMIN 2009, Las Vegas, NV, USA, 13–16 July 2009; pp. 340–346. [Google Scholar]
  4. Baumgartner, D.; Serpen, G. A design heuristic for hybrid ensembles. Intell. Data Anal. 2011, 16, 233–246. [Google Scholar] [CrossRef]
  5. Banfield, R.E.; Hall, L.O.; Bowyer, K.; Kegelmeyer, W. A Comparison of Decision Tree Ensemble Creation Techniques. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 29, 173–180. [Google Scholar] [CrossRef] [PubMed]
  6. Biggio, B.; Fumera, G.; Roli, F. Multiple classifier systems for robust classifier design in adversarial environments. Int. J. Mach. Learn. Cybern. 2010, 1, 27–41. [Google Scholar] [CrossRef]
  7. Bian, S.; Wang, W. On diversity and accuracy of homogeneous and heterogeneous ensembles. Int. J. Hybrid Intell. Syst. 2007, 4, 103–128. [Google Scholar] [CrossRef]
  8. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
  9. Brown, G.; Wyatt, J.; Harris, R.; Yao, X. Diversity creation methods: A survey and categorisation. Inf. Fusion 2005, 6, 5–20. [Google Scholar] [CrossRef]
  10. Caruana, R.; Niculescu-Mizil, A.; Crew, G.; Ksikes, A. Ensemble selection from libraries of models. In Proceedings of the Twenty-first international conference on Machine learning-ICML ’04, Banff, AB, Canada, 4 July 2004; Association for Computing Machinery (ACM): New York, NY, USA, 2004; p. 18. [Google Scholar]
  11. Caruana, R.; Niculescu-Mizil, A. Data mining in metric space. In Proceedings of the 2004 ACM SIGKDD International Conference, Seattle, WA, USA, 22–25 August 2004; Association for Computing Machinery (ACM): New York, NY, USA, 2004; pp. 69–78. [Google Scholar]
  12. Canuto, A.M.P.; Da Costa-Abreu, M.; Oliveira, L.D.M.; Xavier, J.C.; Santos, A.D.M. Investigating the influence of the choice of the ensemble members in accuracy and diversity of selection-based and fusion-based methods for ensembles. Pattern Recognit. Lett. 2007, 28, 472–486. [Google Scholar] [CrossRef]
  13. Cawley, G.; Talbot, N. Miscellaneous Matlab Software. Available online: http://theoval.sys.uea.ac.uk/*gcc/matlab/default.html (accessed on 20 March 2020).
  14. Denisova, V. Energy efficiency as a way to ecological safety: Evidence from Russia. Int. J. Energy Econ. Policy 2019, 9, 32–37. [Google Scholar] [CrossRef]
  15. Dietterich, T.G. Ensemble Methods in Machine Learning. In Computer Vision; Springer Science and Business Media LLC: Berlin, Germany, 2000; Volume 1857, pp. 1–15. [Google Scholar]
  16. Dunn, O. Multiple comparisons among means. Am. Stat. Assoc. 1961, 56, 52–64. [Google Scholar] [CrossRef]
  17. Džeroski, S.; Ženko, B. Is Combining Classifiers with Stacking Better than Selecting the Best One? Mach. Learn. 2004, 54, 255–273. [Google Scholar] [CrossRef] [Green Version]
  18. Freund, Y.; Schapire, R. Experiments with a new boosting algorithm. In Proceedings of the 13th international conference on machine learning, Bari, Italy, 3–6 June 1996; pp. 148–156. [Google Scholar]
  19. Friedman, M. A Comparison of Alternative Tests of Significance for the Problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  20. Goldberg, D.; Nichols, D.; Oki, B.M.; Terry, D. Using collaborative filtering to weave an information tapestry. Commun. ACM 1992, 35, 61–70. [Google Scholar] [CrossRef]
  21. Gorodetsky, V.; Samoylov, V.; Serebryakov, S. Ontology–based context–dependent personalization technology. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Toronto, ON, Canada, 31 August–3 September 2010; pp. 278–283. [Google Scholar]
  22. Hand, D.J.; Vinciotti, V. Local Versus Global Models for Classification Problems. Am. Stat. 2003, 57, 124–131. [Google Scholar] [CrossRef]
  23. Iman, R.L.; Davenport, J.M. Approximations of the critical region of the fbietkan statistic. Commun. Stat. Theory Methods 1980, 9, 571–595. [Google Scholar] [CrossRef]
  24. Lopatin, E. Methodological approaches to research resource saving industrial enterprises. Int. J. Energy Econ. Policy 2019, 9, 181–187. [Google Scholar] [CrossRef]
  25. Lopatin, E. Assessment of Russian banking system performance and sustainability. Banks Bank Syst. 2019, 14, 202–211. [Google Scholar] [CrossRef]
  26. Meynkhard, A. Priorities of Russian energy policy in Russian-Chinese relations. Int. J. Energy Econ. Policy 2020, 10, 65–71. [Google Scholar] [CrossRef]
  27. Meynkhard, A. Energy efficient development model for regions of the Russian Federation: Evidence of crypto mining. Int. J. Energy Econ. Policy 2019, 9, 16–21. [Google Scholar] [CrossRef]
  28. Meynkhard, A. Fair market value of bitcoin: Halving effect. Invest. Manag. Financ. Innov. 2019, 16, 72–85. [Google Scholar] [CrossRef] [Green Version]
  29. Mitchell, T.; Buchanan, B.; DeJong, G.; Dietterich, T.; Rosenbloom, P.; Waibel, A. Machine Learning. Annu. Rev. Comput. Sci. 1990, 4, 417–433. [Google Scholar] [CrossRef]
  30. Thrun, S. Self-Driving Cars-An AI-Robotics Challenge. In Proceedings of the FLAIRS Conference, Key West, FL, USA, 7–9 May 2007; p. 12. [Google Scholar]
  31. Abdi, H. The method of least squares. In Encyclopedia of Measurement and Statistics; SAGE Publishing: Thousand Oaks, CA, USA, 2007. [Google Scholar]
  32. Adomavicius, G.; Mobasher, B.; Ricci, F.; Tuzhilin, A. Context-Aware Recommender Systems. AI Mag. 2011, 32, 67. [Google Scholar] [CrossRef]
  33. Aha, D.; Kibler, D. Instance-based learning algorithms. Mach. Learn. 1991, 6, 37–66. [Google Scholar] [CrossRef] [Green Version]
  34. Ahmed, A.; Kanagal, B.; Pandey, S.; Josifovski, V.; Pueyo, L.G.; Yuan, J. Latent factor models with additive and hierarchically-smoothed user preferences. In Proceedings of the Sixth ACM International Conference, Rome, Italy, 4–8 February 2013; Association for Computing Machinery (ACM): New York, NY, USA; p. 385. [Google Scholar]
  35. Álvarez, S.A.; Ruiz, C.; Kawato, T.; Kogel, W. Neural expert networks for faster combined collaborative and content-based recommendation. J. Comput. Methods Sci. Eng. 2011, 11, 161–172. [Google Scholar] [CrossRef]
  36. An, J.; Dorofeev, M.; Zhu, S. Development of energy cooperation between Russia and China. Int. J. Energy Econ. Policy 2020, 10, 134–139. [Google Scholar] [CrossRef]
  37. Apte, C. The role of machine learning in business optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML10), Haifa, Israel, 21–24 June 2010; pp. 1–2. [Google Scholar]
Figure 1. Distribution of tasks in MASCS.
Figure 2. Step-by-step estimates of the transport table.
Figure 3. Convergence of the discrepancy of estimates of the transport table.
Figure 4. Pseudocode creation scheme.
Figure 5. Stacking of heterogeneous groups of algorithms.
Table 1. Simulated payments.

          Algorithm 1   Algorithm 2   Algorithm 3
Task 1         4             2             3
Task 2         1             5             4
Table 2. Sample predictions of X along with their corresponding threshold-crossing probabilities Pr (X < 8.0) and Pr (X > 40.0) and 90 percent tolerance intervals. The real value of X is given in the last column.

Record Number   X         sX       Pr (X < 8.0)        Pr (X > 40.0)       90 Percent Interval   XREAL
1               7.5901    1.1125   0.638447            0.0                 5.6189–9.5613         6.04
2               8.9423    4.5804   0.421507            1.241341 × 10^−6    0.8266–17.0580        8.50
3               10.4178   1.2074   0.032779            5.551115 × 10^−16   8.2784–12.5572        10.64
4               22.3606   1.3488   1.415263 × 10^−9    4.545264 × 10^−11   19.9708–24.7504       23.75
5               22.3789   4.6541   0.003560            7.687263 × 10^−4    14.1326–30.6252       24.77
6               36.9891   1.5087   5.122037 × 10^−14   0.033197            34.3159–39.6623       36.45
7               39.7402   0.9383   1.519116 × 10^−18   0.394970            38.0776–41.4028       39.04
8               39.7640   1.0177   6.820442 × 10^−18   0.411707            37.9608–41.5672       39.83
9               43.1567   0.9128   1.334947 × 10^−19   0.998410            41.5394–44.7740       41.73
Table 3. Dependence of model precision (CorrelCoef, RMSE), calculation time (Time) and the success rate of 0.9 tolerance interval estimates (Ratio) on the number of records in the training set.

Correl. Coef   RMSE     Time   Correct   Intervals   Ratio
0.999317       0.7364   67.3   7440      7620        0.9764
0.998838       0.7929   65.7   7572      7770        0.9745
0.999166       0.7501   60.8   7707      7920        0.9731
0.998847       0.7848   56.3   7886      8070        0.9772
0.998761       0.7919   51.7   7969      8220        0.9695
0.998387       0.8271   47.1   8065      8370        0.9636
0.998571       0.8058   42.8   8246      8520        0.9678
0.998135       0.8442   39.5   8392      8670        0.9679
0.997246       0.9321   35.0   8496      8820        0.9633
0.998015       0.8550   31.5   8615      8970        0.9604
0.997539       0.9104   28.4   8736      9120        0.9579
0.997075       0.9581   25.0   8759      9270        0.9449
0.997196       0.9439   22.0   8973      9420        0.9525
0.996817       0.9785   18.9   9079      9570        0.9487
0.994856       1.1462   16.2   9237      9720        0.9503
0.989257       1.3735   13.7   9143      9870        0.9263
0.989170       1.3626   11.2   9263      10,020      0.9245
0.974928       1.9311   9.6    9122      10,170      0.8970
0.976517       1.9920   7.6    9103      10,320      0.8821
Table 4. Comparison of model performance.

Method             R2      RMSE    Training Time (s)   Testing Speed (obs/s)
Decision Tree      0.912   0.133   1.092               140,000
Gaussian Process   0.921   0.125   156.89              18,200
Random Forest      0.903   0.104   3.79                31,300
Quadratic SVM      0.867   0.188   141.93              26,000
