ICCCI 2021 Paper 204
Arabic Sentiment Analysis using BERT model

Hasna Chouikhi¹, Hamza Chniter², and Fethi Jarray¹,²[0000-0002-5110-1173]

¹ LIMTIC Laboratory, UTM University, Tunisia. hasna.chouikhi@fst.utm.tn
² Higher Institute of Computer Science of Medenine, Tunisia. chniterhamza07@gmail.com, fethi.jarray@isim.rnu.tn

Abstract. Sentiment analysis is the process of determining whether a text is positive, negative, or neutral. A lot of research has been done to improve the accuracy of sentiment analysis methods, ranging from simple linear models to more complex deep neural network models. Lately, transformer-based models have shown great success in sentiment analysis and are considered the state of the art for various languages (English, German, French, Turkish, Arabic, etc.). However, the accuracy of Arabic sentiment analysis still needs improvement, especially at the tokenization level during data processing. In fact, the Arabic language poses many challenges due to its complex structure, various dialects, and resource scarcity. The improvement brought by the proposed approach consists of integrating an Arabic BERT tokenizer instead of the basic BERT tokenizer. Various tests were carried out on different instances (dialectal and standard). We used hyperparameter optimization by random search to obtain the best result on different datasets. The experimental study proves the efficiency of the proposed approach in terms of classification quality and accuracy compared to the Arabic BERT and AraBERT models.

Keywords: Arabic sentiment analysis · BERT model · Arabic BERT model · Arabic language · Tokenization.

1 Introduction

Sentiment Analysis (SA) is a Natural Language Processing (NLP) research field that focuses on analyzing people's opinions, sentiments, and emotions. SA techniques are categorized into symbolic and sub-symbolic approaches. The former use lexica and ontologies [1] to encode the polarity associated with words and multiword expressions. The latter consist of supervised, semi-supervised, and unsupervised machine learning techniques that perform sentiment classification based on word co-occurrence frequencies. Among all these techniques, the most popular are based on deep neural networks. Some hybrid frameworks leverage both symbolic and sub-symbolic approaches.
SA is based on a multi-step process including data retrieval, data extraction, data pre-processing, and feature extraction. The resulting subtasks of sentiment classification cover three types of classification: polarity classification, intensity classification, and emotion identification. The first type classifies the text as positive, negative, or neutral, while the second identifies the polarity degree as very positive, positive, negative, or very negative. The third identifies the emotion, such as sadness, anger, or happiness.
In practice, the Arabic language has a complex nature due to its ambiguity and rich morphological system. This nature, combined with its various dialects and the lack of resources, represents a challenge for the progress of Arabic sentiment analysis research.
In this paper, we address the tokenization challenges of sentiment analysis for the Arabic language, tackling Arabic SA through an improved tokenization level. The rest of this paper is organized as follows. Section 2 presents the specificities of Arabic sentiment analysis. Section 3 overviews existing work related to ASA. Our proposed method is described in Section 4. Section 5 presents the results and experiments. Finally, we end with a conclusion.

2 Specificities of Arabic Sentiment Analysis

Much research in the literature has shown that sentiment analysis is not a simple classification problem. SA is a suitcase research problem that requires tackling different NLP tasks, including subjectivity detection, aspect extraction, word polarity disambiguation, and time expression recognition.
Besides the general challenges of sentiment analysis such as domain dependency, polarity fuzziness, and spam [2], there are others specific to Arabic SA. As sentiment analysis depends significantly on the morphology of the target language, Abdul-Mageed et al. [3] listed the linguistic properties of the Arabic language in terms of varieties, orthography, and morphology.
Regarding language varieties, Arabic is one of the six official languages of the United Nations and the mother tongue of about 300 million people in 22 different countries, spanning Standard Arabic and dialects. Modern Standard Arabic (MSA) is the formal language of communication understood by the majority of Arabic-speaking people, as it is commonly used in radio, newspapers, and television.
The Arabic language is known for its morphological complexity and richness. The same word may carry important information through suffixes, affixes, and prefixes [4]. An Arabic word exhibits several morphological phenomena, including derivation, inflection, and agglutination.
A significant factor in an accurate sentiment analysis system is the use of large annotated corpora. The accuracy increases with the quality and size of the training corpus of the sentiment classifier. Arabic is still poor in terms of test corpora, which is a well-known problem for sentiment analysis. In addition, the few available datasets are dialectally limited, or even free of dialectal content. To the best of our knowledge, there are no Arabic corpora annotated for sentiment analysis that fully cover the different dialects.
MSA lexica are small compared to English lexica. Accordingly, many works try to translate large English lexica to Arabic. However, the resulting coverage is poor given the morphological complexity of Arabic.


While people on social media express their opinions in their local dialects, the majority of NLP tools are designed to parse MSA [5]. Dealing with dialects makes the task more complicated because there are neither rules nor standard formats.
In this paper, we focus on overcoming the challenges related to the nature of the Arabic language, especially at the tokenization level.

3 Related Work for ASA


The approaches to ASA can be classified into two categories: classical machine learning approaches and deep learning approaches.

3.1 Classical Machine Learning approaches


Machine learning (ML) methods have been broadly used for sentiment analysis. ML addresses sentiment analysis as a text classification problem. Common approaches include support vector machines (SVM), maximum entropy (ME), the naïve Bayes (NB) algorithm, and artificial neural networks (ANNs). NB and SVM are the most commonly used machine learning algorithms for solving the sentiment classification problem [6].
Al-Rubaiee et al. [8] performed sentiment classification in two forms: polarity classification and rating classification. They applied machine learning using SVM, MNB, and BNB. Sentiment polarity classification achieved 90% accuracy, but rating classification (50% accuracy) left much room for improvement.

3.2 Deep Learning approaches


Deep learning (DL) is widely used for sentiment analysis. Socher et al. [9] proposed a recursive neural network (RNN) based approach, trained on a purpose-built sentiment treebank, which improved sentence-level sentiment analysis on English datasets.
DL is less commonly used in Arabic SA than in English SA. Bilal Ghanem et al. [10] used a CNN model for SA tasks and the Stanford segmenter to perform tweet tokenization and normalization. They used Word2vec for word embedding with the ASTD dataset.
Sarah Alhumoud et al. [12] used an LSTM-CNN model with only two unbalanced classes (positive and negative) among the four classes of ASTD (objective, subjective positive, subjective negative, and subjective mixed).
Ali Safaya et al. [13] combined a pre-trained BERT model with convolutional neural networks and presented ArabicBERT, a set of pre-trained transformer language models for the Arabic language. They used the base version of the Arabic BERT model (bert-base-arabic).
ElJundi et al. [14] developed an Arabic-specific universal language model (ULM), hULMonA. They fine-tuned the multilingual BERT (mBERT) ULM for ASA and collected a benchmark dataset for ULM evaluation on sentiment analysis.
Antoun et al. [15] developed an Arabic language representation model, AraBERT, based on the BERT model, to improve the state of the art in several Arabic NLU tasks. They used the BERT-base configuration: 12 encoder blocks, 768 hidden dimensions, 12 attention heads, and a maximum sequence length of 512.
Although word embedding is one of the main steps in any language processing pipeline, only a few recent studies have attempted to evaluate word embeddings for Arabic text. Mohamed A. Zahran et al. [16] translated the word2vec English benchmark and used it to evaluate different embedding techniques on a large Arabic corpus. However, they reported that translating an English benchmark is not a good strategy for evaluating Arabic embeddings.
In this paper, we use an Arabic version of the BERT model, Arabic BERT [13], which was trained from scratch and made publicly available. Arabic BERT is a set of four BERT language models of different sizes (large, base, medium, and mini [13]), all trained on the same data for 4M steps using masked language modeling with whole-word masking (Table 1).

Table 1: Arabic BERT models.

                  Arabic BERT-Mini  Arabic BERT-Medium  Arabic BERT-Base  Arabic BERT-Large
Hidden layers            4                  8                  12                 24
Attention heads          4                  8                  12                 16
Hidden size            256                512                 768               1024
Parameters             11M                42M                110M               340M

3.3 BERT Embedding

More recent word embedding techniques, such as FastText, Embeddings from Language Models (ELMo), and BERT, are yet to be fully explored for ASA, despite pre-trained Arabic versions being publicly available, such as FastText for 157 languages [17] and the pretrained ELMo representations for many languages (ELMoForManyLangs). In this work, we are interested in integrating a newer word embedding technique: BERT.
In recent work on language representation models, Devlin et al. [18] introduced BERT (Bidirectional Encoder Representations from Transformers). Unlike previous language representation models, BERT is pre-trained by conditioning on both left and right context in all layers. Applying BERT to an NLP task only requires fine-tuning one additional output layer for the downstream task. This makes it different from previous word embeddings, which are applied to the SA task as features. As this type of language representation model is new, our aim is to evaluate its performance on the task of Arabic SA.

Fig. 1: BERT model architecture.

As opposed to directional models, which read the text input sequentially left-to-right or right-to-left, the transformer encoder reads the entire sequence of words at once. It is therefore considered bidirectional, though it would be more accurate to say that it is non-directional. This characteristic allows the model to learn the context of a word based on all of its surroundings, to the left and right of the word (Fig. 1).
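To make this property concrete, the following minimal sketch, assuming the Hugging Face transformers library and the publicly released asafaya/bert-base-arabic checkpoint of [13] (the example word and sentences are purely illustrative), shows that the same word receives a different vector in each context:

```python
import torch
from transformers import AutoModel, AutoTokenizer

NAME = "asafaya/bert-base-arabic"  # public Arabic BERT checkpoint [13]
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModel.from_pretrained(NAME)

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector of `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    idx = tokens.index(word)  # assumes the word survives as a single token
    with torch.no_grad():
        return model(**inputs).last_hidden_state[0, idx]

# The polysemous word "عين" (water spring / eye) in two different contexts:
v1 = word_vector("ذهبنا إلى عين الماء", "عين")
v2 = word_vector("أصيب في عين واحدة", "عين")
# Similarity is well below 1.0: the vector depends on the whole sentence.
print(torch.cosine_similarity(v1, v2, dim=0).item())
```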

3.4 Arabic tokenizer

Tokenization of the Arabic language presents a challenge because of its rich and complex morphology. A token is usually defined as a sequence of one or more letters preceded and followed by a space. This definition works well for non-agglutinative languages like English.
Arabic tokenization has been described in various studies and implemented in many systems, as it is a required preliminary stage for further processing. According to [19], there are different levels at which an Arabic tokenizer can be developed, depending on the depth of the linguistic analysis involved. They presented three models of tokenization: (1) tokenization combined with morphological analysis, (2) a tokenization guesser, and (3) tokenization dependent on the morphological analyser.
Abdelali et al. [20] introduced Farasa, an Arabic tokenizer that uses an SVM with linear kernels together with a variety of features and lexicons to rank the possible segmentations of a word. They measured the performance of the tokenizer in terms of accuracy and efficiency on two NLP tasks, namely machine translation (MT) and information retrieval (IR).

The BERT tokenizer [18] was trained using WordPiece tokenization, which means that a word can be broken down into more than one sub-word. The vector BERT assigns to a word is a function of the entire sentence; therefore, a word can have different vectors depending on the context. There are different built-in tokenizers: the basic one is a character tokenizer (Fig. 2), whereas the pretrained Arabic BERT uses a word-by-word tokenizer (Fig. 3).

Fig. 2: Tokenization using the BERTTokenizer method

Fig. 3: Tokenization using the Arabic BERT tokenizer method

The choice of this tokenizer is validated by a test on the ASTD dataset, where we obtained an accuracy of 81% with the basic BERTTokenizer and 91% with the pretrained Arabic BERT tokenizer.
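As a hedged illustration of this gap, the following sketch contrasts the two tokenizers on the same sentence. It assumes the Hugging Face transformers library; bert-base-uncased stands in for the basic BERT tokenizer, and asafaya/bert-base-arabic is the pretrained Arabic BERT tokenizer of [13]:

```python
from transformers import AutoTokenizer

basic = AutoTokenizer.from_pretrained("bert-base-uncased")
arabic = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic")

sentence = "أحب معالجة اللغات الطبيعية"  # "I love natural language processing"

# The basic vocabulary has no Arabic entries, so the sentence degrades to
# character-level pieces or [UNK]; the Arabic tokenizer keeps whole words
# (or meaningful sub-words), preserving the sentiment-bearing units.
print(basic.tokenize(sentence))
print(arabic.tokenize(sentence))
```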

4 Proposed method

Among all the cited works, the approach of Ali Safaya et al. [13] is the closest to ours. Figure 4 depicts the proposed architecture for Arabic SA. Our architecture is composed of three blocks. The first block is the text pre-processing step, where we use an Arabic BERT tokenizer to split words into tokens. The second block is the training model: the Arabic BERT model is used with only 8 encoders (the medium configuration [13]). The outputs of the last four hidden layers are concatenated to obtain a representation vector of size 512x4x128, with a batch size of 16 (32 for the AJGT dataset). The pooling operation's output is concatenated and flattened, then passed through a dense layer and a Softmax function to produce the final label. The third block is the classifier, where we use a dropout layer for regularization and a fully-connected layer for the output. The choice of the maximum token length is validated by a test on the AJGT dataset (see Fig. 5).

Fig. 4: Arabic BERT model architecture.
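The following PyTorch sketch is our reading of this classification head, not the authors' released code. It assumes the transformers library and the public asafaya/bert-medium-arabic checkpoint [13]; the mean-pooling detail is an assumption:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class ArabicBertClassifier(nn.Module):
    """Blocks 2 and 3: medium Arabic BERT encoder plus classifier head."""

    def __init__(self, n_classes: int = 2, hidden: int = 512):
        super().__init__()
        # Medium configuration: 8 encoder blocks, hidden size 512 (Table 1).
        self.bert = AutoModel.from_pretrained(
            "asafaya/bert-medium-arabic", output_hidden_states=True
        )
        self.dropout = nn.Dropout(0.1)  # regularization (Table 2)
        self.fc = nn.Linear(4 * hidden, n_classes)  # fully-connected output

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Concatenate the last four hidden layers: (batch, seq_len, 4 * 512).
        h = torch.cat(out.hidden_states[-4:], dim=-1)
        pooled = h.mean(dim=1)  # pool over the token axis, then flatten
        logits = self.fc(self.dropout(pooled))
        return torch.softmax(logits, dim=-1)  # final label distribution
```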

Table 2: Hyper-parameters used in the approach.

Hyper-parameter   Value
Batch size        16 (32 for AJGT)
Dropout           0.1
Max length        128
Hidden size       512
Learning rate     2e-5
Optimizer         AdamW
Epochs            10/20/50

Fig. 5: Accuracy on AJGT as a function of the maximum token length (32, 64, 128, 256); the best accuracy, 96.11%, is obtained at length 128.

Table 2 displays the hyperparameters of the proposed model. The number of epochs varies according to the dataset and the memory reserved for running the model; it can be 10, 20, or 50. The overall model is trained with the AdamW optimizer. We note that with hyperparameter optimization by random search, we outperform the approach of [13].
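A minimal sketch of such a random search is given below; the search space, the number of trials, and the train_and_evaluate helper are illustrative assumptions drawn from Table 2 and the text, not the authors' exact protocol:

```python
import random

# Candidate values follow Table 2 and Fig. 5; purely illustrative.
SPACE = {
    "lr": [1e-5, 2e-5, 3e-5, 5e-5],
    "batch_size": [16, 32],
    "max_length": [32, 64, 128, 256],
    "epochs": [10, 20, 50],
}

def train_and_evaluate(cfg: dict) -> float:
    # Hypothetical helper: fine-tune the model with `cfg` and return the
    # validation accuracy. Replaced here by a stand-in random score.
    return random.random()

best_acc, best_cfg = 0.0, None
for _ in range(20):  # the number of trials is an assumption
    cfg = {key: random.choice(values) for key, values in SPACE.items()}
    acc = train_and_evaluate(cfg)
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg
print(best_cfg, best_acc)
```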
Table 3 shows the differences between our model and the Arabic BERT [13] and AraBERT [15] models. It shows that, with an Arabic tokenizer, the number of encoders in the Arabic BERT model influences the accuracy.

Table 3: Differences between the proposed approach, AraBERT [15], and Arabic BERT [13].

                   Batch size   Epochs     Layers   Activation function
Our approach       16/32        10/20/50      8     Softmax
Arabic BERT [13]   16/32        10           12     ReLU
AraBERT [15]       512/128      27           12     Softmax

5 Experiments and results


In this work, we used five datasets to train, validate, and test the classifier. All were split into three subsets: 80% for training, 10% for validation, and 10% for testing (a minimal split sketch is given after the list below).
– ASTD: The Arabic Sentiment Twitter Dataset [23] contains around 10K Arabic tweets from different dialects, annotated as positive, negative, neutral, or mixed.
– HARD: The Hotel Arabic Reviews Dataset [24] contains 93,700 reviews, each with two parts: positive comments and negative comments. It covers 1,858 hotels and 30,889 contributing users (68% positive, 13% negative, and 19% neutral).
– LABR: The Large-scale Arabic Book Reviews dataset [25] contains over 63,000 book reviews in Arabic.
– AJGT: The Arabic Jordanian General Tweets dataset [26] contains 1,800 tweets annotated as positive or negative.
– ArSenTD-Lev: The Arabic Sentiment Twitter Dataset for LEVantine [27] contains 4,000 tweets written in Levantine dialect, annotated for sentiment, topic, and sentiment target. We use 3 of its 5 classes.
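As a minimal sketch of the 80/10/10 split above, assuming scikit-learn and stand-in texts/labels lists for any of the corpora:

```python
from sklearn.model_selection import train_test_split

texts, labels = ["..."] * 100, [0, 1] * 50  # stand-in data

# Hold out 20%, then cut it in half: 80% train / 10% validation / 10% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    texts, labels, test_size=0.2, random_state=42
)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=42
)
```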

Table 4: Arabic language varieties used in the datasets.

Dataset       Language   Samples   Classes   Category
ASTD          MSA         10,000      4      opinion
LABR          DA          63,000      2      opinion
HARD          MSA-DA      93,700      2      opinion
AJGT          MSA-DA       1,800      2      opinion
ArSenTD-Lev   DA           4,000      3      opinion

We compare the proposed approach with two families of methods: classical and deep learning-based. The nature of the approaches used explains the gap in accuracy values relative to our model.

Table 5: Comparison between classical and deep learning approaches (accuracy).

Approach                  ASTD     LABR     AJGT     HARD     ArsenTD-Lev
CNN [10]                  79%      -        -        -        -
LSTM [11]                 81%      71%      -        -        -
LSTM-CNN [12]             81%      -        -        -        -
CNN-CROW [28]             72.14%   -        -        -        -
DE-CNN-G1 [29]            82.48%   -        93.06%   -        -
LR [30]                   87.10%   84.97%   -        -        -
GNB [30]                  86%      85%      -        -        -
SVM [25]                  -        50%      -        -        -
Arabic-BERT Base [13]     71.4%    -        -        -        55.2%
hULMonA [14]              69.9%    -        -        95.7%    52.4%
AraBERT [15]              92.6%    86.7%    93.8%    96.2%    59.4%
Our approach              91%      87%      96.11%   95%      75%

Fig. 6: Comparison between classical and deep learning approaches on the ASTD dataset (accuracy per approach).


The results in Table 5, graphically visualized in Figs. 6 and 7, show how the accuracy varies according to the method and the dataset. They reveal a close competition between our model and that of Antoun et al. [15]. Our model gives the best result on the LABR, AJGT, and ArsenTD-Lev datasets, while the work of Antoun et al. [15] gives the best result on the ASTD and HARD datasets. The difference in accuracy between the two works is slight (92.6% versus 91% on the ASTD dataset, and 86.7% versus 87% on the LABR dataset). However, our model gives a very good result on the ArsenTD-Lev dataset (75%, compared to accuracy values that do not exceed 60% for the other models).

Fig. 7: Comparison between approaches on the remaining datasets.

6 Conclusion

This paper proposes a BERT-based approach to sentiment analysis in Arabic. This study clearly demonstrates that Arabic Sentiment Analysis (ASA) has become one of the research areas that draw the attention of many researchers.
Numerical results show that our approach outperforms existing ASA approaches. Many challenges still need to be resolved to design an effective and mature sentiment analysis system; most of them are inherited from the nature of the Arabic language itself. In future work, we will try to overcome these challenges.

References

1. Dragoni, Mauro and Poria, Soujanya and Cambria, Erik. (2018). "OntoSenticNet: A commonsense ontology for sentiment analysis". IEEE Intelligent Systems 33(3):77-85.
2. Oueslati, Oumaima and Cambria, Erik and Ben HajHmida, Moez and Ounelli, Habib. (2020). "A review of sentiment analysis research in Arabic language". Future Generation Computer Systems 112:408-430.
3. Abdul-Mageed, Muhammad and Diab, Mona and Korayem, Mohammed. (2011). "Subjectivity and sentiment analysis of modern standard Arabic". Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, short papers, Volume 2. Association for Computational Linguistics. 587-591.
4. Shoukry, Amira and Rafea, Ahmed. (2012). "Sentence-level Arabic sentiment analysis". Collaboration Technologies and Systems (CTS), 2012 International Conference on. IEEE. 546-550.
5. Zaghouani, Wajdi. (2017). "Critical survey of the freely available Arabic corpora". https://arxiv.org/abs/1702.07835.
6. Imran, Azhar and Faiyaz, Muhammad and Akhtar, Faheem. (2018). "An enhanced approach for quantitative prediction of personality in Facebook posts". International Journal of Education and Management Engineering (IJEME) 8(2):8-19.
7. Alsayat, Ahmed and Elmitwally, Nouh. (2020). "A comprehensive study for Arabic Sentiment Analysis (Challenges and Applications)". Egyptian Informatics Journal, Elsevier, 21(1):7-12.
8. Al-Rubaiee, Hamed and Qiu, Renxi and Li, Dayou. (2016). "Identifying Mubasher software products through sentiment analysis of Arabic tweets". 2016 International Conference on Industrial Informatics and Computer Systems (CIICS). IEEE. 1-6.
9. Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D. and Ng, Andrew Y. and Potts, Christopher. (2013). "Recursive deep models for semantic compositionality over a sentiment treebank". Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1631-1642.
10. Ghanem, Bilal and Karoui, Jihen and Benamara, Farah and Moriceau, Véronique and Rosso, Paolo. (2019). "IDAT at FIRE2019: Overview of the track on irony detection in Arabic tweets". Proceedings of the 11th Forum for Information Retrieval Evaluation. 10-13.
11. Shoukry, Amira and Rafea, Ahmed. (2012). "Sentence-level Arabic sentiment analysis". Collaboration Technologies and Systems (CTS), 2012 International Conference on. IEEE. 546-550.
12. Alhumoud, Sarah and Albuhairi, Tarfa and Alohaideb, Wejdan. (2015). "Hybrid sentiment analyser for Arabic tweets using R". 2015 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K). IEEE. 417-424.
13. Safaya, Ali and Abdullatif, Moutasem and Yuret, Deniz. (2020). "KUISAIL at SemEval-2020 Task 12: BERT-CNN for Offensive Speech Identification in Social Media". arXiv:2007.13184 [cs.CL].
14. ElJundi, Obeida and Antoun, Wissam and El Droubi, Nour and Hajj, Hazem and El-Hajj, Wassim and Shaban, Khaled. (2019). "hULMonA: The universal language model in Arabic". Proceedings of the Fourth Arabic Natural Language Processing Workshop. 68-77.
15. Antoun, Wissam and Baly, Fady and Hajj, Hazem. (2020). "AraBERT: Transformer-based model for Arabic language understanding". arXiv preprint arXiv:2003.00104.
16. Zahran, Mohamed A. and Magooda, Ahmed and Mahgoub, Ashraf Y. and Raafat, Hazem and Rashwan, Mohsen and Atyia, Amir. (2015). "Word representations in vector space and their applications for Arabic". International Conference on Intelligent Text Processing and Computational Linguistics. Springer. 430-443.
17. Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas. (2018). "Learning word vectors for 157 languages". Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).
18. Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina. (2019). "BERT: Pre-training of deep bidirectional transformers for language understanding". Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Minneapolis, Minnesota, June. Association for Computational Linguistics. 4171-4186.
19. Attia, Mohammed. (2007). "Arabic tokenization system". Proceedings of the 2007 Workshop on Computational Approaches to Semitic Languages: Common Issues and Resources. 65-72.
20. Abdelali, Ahmed and Darwish, Kareem and Durrani, Nadir and Mubarak, Hamdy. (2016). "Farasa: A fast and furious segmenter for Arabic". Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations. Association for Computational Linguistics. 11-16. https://www.aclweb.org/anthology/N16-3003, doi:10.18653/v1/N16-3003.
21. Habash, Nizar and Rambow, Owen. (2005). "Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop". Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05). 573-580.
22. Monroe, Will and Green, Spence and Manning, Christopher D. (2014). "Word segmentation of informal Arabic with domain adaptation". Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 206-211.
23. Nabil, Mahmoud and Aly, Mohamed and Atiya, Amir. (2015). "ASTD: Arabic sentiment tweets dataset". Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 2515-2519.
24. Elnagar, Ashraf and Khalifa, Yasmin S. and Einea, Anas. (2018). "Hotel Arabic-reviews dataset construction for sentiment analysis applications". Intelligent Natural Language Processing: Trends and Applications. Springer. 35-52.
25. Aly, Mohamed and Atiya, Amir. (2013). "LABR: A large scale Arabic book reviews dataset". Meetings of the Association for Computational Linguistics (ACL), Sofia, Bulgaria.
26. Alomari, Khaled Mohammad and ElSherif, Hatem M. and Shaalan, Khaled. (2017). "Arabic tweets sentimental analysis using machine learning". Proceedings of the International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, Montreal, Canada. 602-610.
27. Baly, Ramy and Khaddaj, Alaa and Hajj, Hazem and El-Hajj, Wassim and Shaban, Khaled Bashir. (2019). "ArSenTD-Lev: A multi-topic corpus for target-based sentiment analysis in Arabic Levantine tweets". arXiv preprint arXiv:1906.01830.
28. Eskander, Ramy and Rambow, Owen. (2015). "SLSA: A sentiment lexicon for standard Arabic". EMNLP. 2545-2550.
29. Dahou, Abdelghani and Abd Elaziz, Mohamed and Zhou, Junwei. (2019). "Arabic sentiment classification using convolutional neural network and differential evolution algorithm". Computational Intelligence and Neuroscience.
30. Harrat, Salima and Meftouh, Karima and Smaili, Kamel. (2019). "Machine translation for Arabic dialects (survey)". Inf. Process. Manage. 56(2):262-273.
