Bonfring International Journal of Networking Technologies and Applications, Vol. 11, Issue 1, February 2024
Modified Fuzzy Neural Network Approach for
Academic Performance Prediction of Students in
Early Childhood Education
Marwah Hameed
Abstract--- Modern education relies heavily on educational
technology, which provides students with unique learning
opportunities and enhances their ability to learn. For many
years now, computers and other technological tools have been
an integral part of education. However, compared to other
educational levels, the incorporation of educational technology
in early childhood education is a more recent trend. It is
because of this that materials and procedures tailored to young
children must be created, implemented, and studied. The use of
artificial intelligence techniques in educational technology
resources has resulted in better engagement for students. Early
childhood special education students' academic achievement is
predicted using a Modified Fuzzy Neural Network (MFNN).
Before constructing the classifier, the dataset is preprocessed to remove extraneous information. This study then tests an organized approach to implementing the modified fuzzy neural network for predicting academic achievement in early childhood settings. Considerations for the analysis of academic achievement in early childhood education are discussed in this article, including recommendations for implementing the proposed modified fuzzy neural network. In terms of evaluation metrics such as precision, recall, accuracy, and the F1 coefficient, the proposed model outperforms conventional machine-learning (ML) techniques.
Keywords--- Early Childhood Special Education,
Computer-based Learning System, Artificial Intelligence,
Modified Fuzzy Neural Network.
I. INTRODUCTION
Educational technology includes computer-based learning. Computers have been used in education since the 1950s, and students and teachers can utilize them independently or in teams (Wolery, M., et al., 2002). However, educational technology generally combines resources other than computers to maximize each resource's unique traits and benefits, particularly in early childhood education (McConnell, S. R., 2000). Besides computers, interactive whiteboards and
programmable toys are commonly employed in early childhood
education. Using instructional technology has several benefits.
Educational technology may motivate pupils to learn by
attracting their attention and encouraging innovative actions
(Lifter, K., et al., (2011)). The utilization of technology allows
for unique instructional characteristics such as multimedia-
based engagement and problem-solving process visualization.
Technology also fosters collaborative learning and
constructivism (Odom, S. L., & Wolery, M. (2003)). Teaching
pupils about educational technology helps them understand the
Information Society. Finally, technology may help schools
connect with their communities.
Artificial Intelligence (AI) is used in many fields.
Educational technology is an interesting topic for AI (Warren, S.F., 2000). Since the 1970s, Artificial Intelligence has been used in instructional technologies. E-learning is a broad term for the use of instructional technology to meet specific educational needs (Carta, J.J., 2002). The emphasis on new resources in educational technology often ignores older but still important tools (Warren, S.F., & Walker, D., 2005). The major goal is to assist students and teachers beyond traditional techniques. Using
instructional technology in the classroom might be difficult.
The integration process should address concerns specific to a
student group (Schwartz, I. S. (2000)). Technology can help
solve specific educational issues or offer the infrastructure for
activities that would not be possible without it. Creating a good prediction model improves the forecast range. In this work, academic performance in early childhood special education is predicted using an MFNN.
The rest of the paper is organized as follows: Section 2 describes the recommended technique, Section 3 presents the findings, and Section 4 concludes and plans future work.
II. PROPOSED METHODOLOGY
For predicting academic success in early childhood special
education, an MFNN is presented. Primarily, preprocessing
eliminates unnecessary data from the dataset, increasing the
classifier's prediction performance. Another goal of this
research is to apply modified fuzzy neural networks to predict
academic achievement in early childhood education.
Considerations for analyzing academic success in early
childhood education are discussed in this article. Figure 1
depicts the suggested methodology's overall procedure.
Marwah Hameed, Department of Computer Science, College of Computer Science and Information Technology, University of Kirkuk, Kirkuk, Iraq.
DOI: 10.9756/BIJNTA/V11I1/BIJ24007
ISSN 2320 - 5377 | © 2024 Bonfring
Input Database (ICFES) → Data Preprocessing (Z-score normalization) → Classification by Modified Fuzzy Neural Network (MFNN) → Performance Measures
Figure 1: The Overall Process of the Proposed Methodology
1. Data Preprocessing Using Z-Score Normalization
Each experiment's raw intensity data were normalized by computing the average intensity for each dataset and then the average of these averages [9]. This grand average was used to compute normalization factors for each experiment, so that all normalized data were scaled to the grand average. A z-score follows the standard normal distribution curve, ranging from a −3 standard deviation (far left of the curve) to a +3 standard deviation (far right of the curve). To use a z-score, the mean μ and the population standard deviation σ must be known.
Specifically, let x_i (i = 1, 2, …, D) represent the i-th component of each feature vector x ∈ R^D. The mean and the standard deviation of these D components are evaluated as:

μ_x = (1/D) Σ_{i=1}^{D} x_i,   σ_x = √( (1/D) Σ_{i=1}^{D} (x_i − μ_x)² )   (1)

Z-score normalization is then applied as,

x^(zn) = ZN(x) = (x − μ_x) / σ_x   (2)
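As a rough illustration, equations (1) and (2) can be sketched in Python as follows; the function name and sample vector are illustrative, not from the paper:

```python
import numpy as np

def z_score_normalize(x):
    """Z-score normalization per equations (1) and (2):
    x^(zn) = (x - mu_x) / sigma_x, using population statistics."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()        # mu_x = (1/D) * sum_i x_i
    sigma = x.std()      # sigma_x = sqrt((1/D) * sum_i (x_i - mu_x)^2)
    return (x - mu) / sigma

x = np.array([4.0, 8.0, 6.0, 2.0])   # D = 4
z = z_score_normalize(x)
print(z.mean())              # ≈ 0: the normalized vector has zero mean
print(np.linalg.norm(z))     # = sqrt(D) = 2: it lies on a hypersphere of radius sqrt(D)
```

The two printed checks confirm the geometric property discussed below: every normalized vector has zero mean and Euclidean length √D.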
The FBP technique calculates the net value using an LR-type fuzzy number and so does not presume criteria independence. The FBP algorithm also avoids oscillations and falling into local minima. The convergence of the FBP algorithm for single-output networks with single and multiple training patterns has been proven.
3. Fuzzy Backpropagation Algorithm (FBP Algorithm)
Different neuro-fuzzy approaches have recently been presented for calculating the net value of the i-th neuron's inputs. The mapping is mathematically represented by Sugeno's fuzzy integral, which is based on a psychological foundation.
Step 1: Randomly create the initial weight set w for the input-hidden layer, in which each w_ji = (w_mji, w_αji, w_βji) is an LR-type fuzzy number, and create the weight set w′ for the hidden-output layer, where each w′_kj = (w′_mkj, w′_αkj, w′_βkj).
Step 2: Consider the input-output pattern set (I_p, D_p), p = 1, 2, …, N, in which I_p = (I_p0, I_p1, …, I_pl) and every I_pi is an LR-type fuzzy number.
Step 3: Allocate values for α and η; α = 0.1, η = 0.9.
Step 4: Acquire the next pattern set (I_p, D_p) and assign O_pi = I_pi, i = 1, 2, …, l.
Step 5: Calculate the input-to-hidden neuron outputs
O′_pj = f(NET_pj), j = 1, 2, …, m; O′_p0 = 1
where NET_pj = CE(Σ_i w_ji O_pi).
Step 6: Evaluate the hidden-to-output neuron outputs
O″_pk = f(NET′_pk), k = 1, 2, …, n
Based on these calculations, z-score normalization projects the original feature vectors along the all-ones vector 1 onto a hyperplane that contains the origin and is perpendicular to 1. These vectors are then scaled to a common length of √D, so the final normalized vectors lie on a hypersphere of radius √D. After preprocessing the given data, the feature selection procedure is carried out, as explained in the following section.
2. Classification Using MFNN
Neural networks and fuzzy logic are emerging technologies that could be used in pharmaceutical formulation and processing (Yang, B., et al., 2007). ANNs and evolutionary algorithms work well together to forecast and optimize formulation conditions. Fuzzy-neural systems seem to have flourished more than other methods of symbolic connectionism. A fuzzy neural network has three layers: an input layer (fuzzification), a hidden layer (fuzzy rules), and an output layer (defuzzification). Sometimes a five-layer network containing fuzzy sets in the second and fourth layers can be found. In practice, the criteria are interconnected, and a linear evaluation function cannot capture inter-criteria relationships. To overcome this disadvantage of the standard backpropagation (SBP) algorithm, this paper proposes a Fuzzy Backpropagation (FBP) technique.
where NET′_pk = CE(Σ_j w′_kj O′_pj).
Step 7: Evaluate the modification of the weights Δw′(t) for the hidden-output layer as below.
Evaluate
∆E_p(t) = (∂E_p/∂w′_mkj, ∂E_p/∂w′_αkj, ∂E_p/∂w′_βkj)
Evaluate
∆w′(t) = −η ∆E_p(t) + α ∆w′(t − 1)
The modified weight of the hidden-to-output neurons is
W′(t) = W′(t − 1) + ∆W′(t)
Step 8: Calculate the modification of the weights Δw(t) for the input-hidden layer as follows.
Let
δ_pmk = −(D_pk − O″_pk) O″_pk (1 − O″_pk) · 1
δ_pαk = −(D_pk − O″_pk) O″_pk (1 − O″_pk) · (−1/3)
δ_pβk = −(D_pk − O″_pk) O″_pk (1 − O″_pk) · (1/3)
Evaluate
∆E_p(t) = (∂E_p/∂w_mji, ∂E_p/∂w_αji, ∂E_p/∂w_βji)
Calculate
∆w(t) = −η ∆E_p(t) + α ∆w(t − 1)
Step 9: Modify the weights for the input-hidden and hidden-output layers as,
W(t) = W(t − 1) + ∆W(t)
W′(t) = W′(t − 1) + ∆W′(t)
Table 1: Performance Results of the Proposed and Existing Prediction Methods

Metrics    | SVM   | ANN   | MFNN
-----------|-------|-------|------
Accuracy   | 91.24 | 93.58 | 99.57
Precision  | 71.45 | 84.67 | 91.58
Recall     | 78.24 | 86.57 | 92.51
F-measure  | 87.24 | 92.57 | 98.24

Table 1 tabulates the performance results of the proposed and existing prediction methods.
Step 10: p = p + 1; if (p ≤ N), go to Step 5.
Step 11: Output w and w′, the final weight sets.
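A minimal Python sketch of Steps 1 and 5–8 follows. It is illustrative only: LR-type fuzzy weights are stored as (m, α, β) triples, CE is taken to be the crisp equivalent m + (β − α)/3 (an assumption, but one consistent with the 1, −1/3, 1/3 multipliers in the Step 8 deltas), fuzzy arithmetic is reduced to component-wise operations under the assumption of non-negative crisp inputs, and all function names are hypothetical:

```python
import math
import random

def ce(m, a, b):
    # Assumed crisp equivalent of an LR-type fuzzy number (m, alpha, beta);
    # differentiating CE w.r.t. (m, alpha, beta) gives (1, -1/3, 1/3),
    # the multipliers appearing in the Step 8 delta formulas.
    return m + (b - a) / 3.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def init_weights(n_from, n_to):
    # Step 1: every weight is an LR-type fuzzy number (m, alpha, beta)
    return [[(random.uniform(-1, 1), 0.1, 0.1) for _ in range(n_from)]
            for _ in range(n_to)]

def forward_layer(weights, inputs):
    # Steps 5-6: NET_j = CE(sum_i w_ji * O_i), then O_j = f(NET_j);
    # the component-wise fuzzy sum assumes non-negative inputs O_i
    outs = []
    for row in weights:
        m = sum(w[0] * o for w, o in zip(row, inputs))
        a = sum(w[1] * o for w, o in zip(row, inputs))
        b = sum(w[2] * o for w, o in zip(row, inputs))
        outs.append(sigmoid(ce(m, a, b)))
    return outs

def output_delta(d, o):
    # Step 8: delta factors for the (m, alpha, beta) weight components
    base = -(d - o) * o * (1.0 - o)
    return (base, base * (-1.0 / 3.0), base * (1.0 / 3.0))

random.seed(0)
w = init_weights(3, 2)                     # 3 inputs -> 2 hidden neurons
hidden = forward_layer(w, [0.2, 0.5, 0.3])
print(all(0.0 < o < 1.0 for o in hidden))  # sigmoid outputs lie in (0, 1)
```

The weight-update recursions of Steps 7–9 would then apply −η times these gradients, plus the momentum term α Δw(t − 1), to each (m, α, β) component.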
K = (cos((π / G_max) × T) + 2.5) / 4   (3)
in which T is the number of iterations. Assuming G_max = 40, the changing curve of the value K is obtained. Using K from formula (3), the velocity is updated as in formula (4):
v_id = K × [v_id + 2 × rand() × (p_id − x_id) + 2 × Rand() × (p_gd − x_id)]   (4)
Here V_id is the regularity distribution factor, and the decreasing V_id value is dispersed in combination with the rand function. The modified number of leaders per iteration is V_id · N, and the number of followers is (1 − V_id) · N.
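Formulas (3) and (4) can be sketched in Python as follows. This is a minimal illustration: the particle-swarm-style names v_id, p_id, and p_gd follow the text, and G_max = 40 is the value assumed there:

```python
import math
import random

G_MAX = 40  # number of iterations assumed in the text

def k_factor(t, g_max=G_MAX):
    # Formula (3): K = (cos((pi / G_max) * T) + 2.5) / 4
    return (math.cos((math.pi / g_max) * t) + 2.5) / 4.0

def update_velocity(v_id, x_id, p_id, p_gd, k):
    # Formula (4): velocity update scaled by the regularity distribution
    # factor K; rand() and Rand() are independent uniform draws in [0, 1)
    return k * (v_id
                + 2.0 * random.random() * (p_id - x_id)
                + 2.0 * random.random() * (p_gd - x_id))

# K decays smoothly from (1 + 2.5)/4 = 0.875 at T = 0
# to (-1 + 2.5)/4 = 0.375 at T = G_max
print(k_factor(0), k_factor(G_MAX))
```

The cosine schedule is convex at first (wide exploration) and concave late (local refinement), matching the convergence argument above.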
III. RESULTS AND DISCUSSION
The ICFES collected the information for this research. Approximately 200,000 Colombian university students took the SABER PRO test in 2016. The data included each student's SABER 11 test results, socioeconomic status, childhood school characteristics, and academic status. The original data set included student gender, age, and academic program. The pupils' identities were kept secret because they were coded in the ICFES data collection. TP, FP, TN, and FN rates are used to determine the various performance measures. The first performance metric is precision, the fraction of retrieved instances that are relevant. Recall is defined as the proportion of relevant instances that are retrieved. Precision and recall are both significant in evaluating a prediction approach's success, so these two metrics can be merged with equal weights to form the F-measure. Accuracy is the proportion of correctly predicted instances to all predicted instances.
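These four measures can be computed from the confusion-matrix counts as in the following sketch; the TP/FP/TN/FN values are made up for illustration:

```python
def evaluate(tp, fp, tn, fn):
    # Precision: fraction of retrieved instances that are relevant
    precision = tp / (tp + fp)
    # Recall: fraction of relevant instances that are retrieved
    recall = tp / (tp + fn)
    # Accuracy: fraction of all instances predicted correctly
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    # F-measure: equal-weight (harmonic-mean) combination
    f_measure = 2.0 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f_measure

p, r, a, f = evaluate(tp=80, fp=20, tn=90, fn=10)
print(round(p, 3), round(r, 3), round(a, 3), round(f, 3))
# 0.8 0.889 0.85 0.842
```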
Figure 2: Precision and Recall Results of the Proposed and Existing Methods
Figure 2 shows that the proposed MFNN technique gives higher precision and recall values than the existing classifiers. From the results, it is identified that the proposed algorithm is highly effective, so the performance of the proposed model is higher compared with classifiers built on previously generated models.
4. Regularity Distribution Factor
To reduce the likelihood of failure in iterations, the regularity distribution factor should follow a convex function in the early iterations, allowing the population to search for an optimal solution over a large range. In the late phase, a concave function should be followed so that the regularity distribution factor can gradually decrease to its minimum and local development can occur. This ensures the algorithm's convergence. The regularity distribution factor structured on the basis of the cosine function is demonstrated in formula (3).
Figure 3: Accuracy Results of the Proposed and Existing Methods
Figure 3 shows the relationship between the experimental results and the MFNN-based predicted results against the SVM- and ANN-based methods. The results indicate that the proposed MFNN-based learning greatly improves prediction accuracy among the different methods.
IV. CONCLUSION
This study examines the use of Artificial Intelligence in
early childhood education. For predicting academic success in
early childhood special education, an MFNN is suggested. It
was designed to classify students' academic achievement using
numerous MFNNs. This conclusion may be explained by the
fact that different academic programs attract students with
diverse abilities and interests. The substance of each academic
program may also have influenced student preparation and
performance. Thus, the predictive efficacy of selected academic
performance predictors may vary by discipline. The suggested
model outperforms the existing techniques in terms of
prediction accuracy. So more topologies with different learning
paradigms should be investigated. Furthermore, determining
MFNN confidence and prediction intervals requires more
research.
REFERENCES
[1] M. Wolery and D.B. Bailey Jr, “Early childhood special education research”, Journal of Early Intervention, Vol. 25, No. 2, Pp. 88-99, 2002.
[2] S.R. McConnell, “Assessment in early intervention and early childhood
special education: Building on the past to project into our future”, Topics
in Early Childhood Special Education, Vol. 20, No. 1, Pp. 43-48, 2000.
[3] K. Lifter, S. Foster-Sanda, C. Arzamarski, J. Briesch and E. McClure,
“Overview of play: Its uses and importance in early intervention/early
childhood special education”, Infants & Young Children, Vol. 24, No. 3,
Pp. 225-245, 2011.
[4] S.L. Odom and M. Wolery, “A unified theory of practice in early
intervention/early childhood special education: Evidence-based
practices”, The Journal of Special Education, Vol. 37, No. 3,
Pp. 164-173, 2003.
[5] S.F. Warren, “The future of early communication and language
intervention”, Topics in early childhood special education, Vol. 20,
No. 1, Pp. 33-37, 2000.
[6] J.J. Carta, “An early childhood special education research agenda in a
culture of accountability for results”, Journal of Early Intervention,
Vol. 25, No. 2, Pp. 102-104, 2002.
[7] S.F. Warren and D. Walker, “Fostering early communication and language development”, Handbook of Research Methods in Developmental Science, Pp. 249-270, 2005.
[8] I.S. Schwartz, “Standing on the shoulders of giants: Looking ahead to
facilitating membership and relationships for children with disabilities”,
Topics in early childhood special education, Vol. 20, No. 2, Pp. 123-128,
2000.
[9] C. Cheadle, M.P. Vawter, W.J. Freed and K.G. Becker, “Analysis of
microarray data using Z score transformation”, The Journal of molecular
diagnostics, Vol. 5, No. 2, Pp. 73-81, 2003.
[10] B. Yang, L. Yao and H.Z. Huang, “Early software quality prediction
based on a fuzzy neural network model”, In Third International
Conference on Natural Computation (ICNC 2007), Vol. 1, Pp. 760-764,
2007.