Authors:
Cristian David Estupiñán Ojeda, Cayetano Nicolás Guerra Artal and Francisco Mario Hernández Tejera
Affiliation:
University Institute SIANI, University of Las Palmas de Gran Canaria, 35017, Las Palmas de Gran Canaria, Spain
Keyword(s):
Deep Learning, Linear Transformer, Informer, Convolution, Self Attention, Organization, Neural Machine Translation.
Abstract:
The use of architectures based on transformers represents a state-of-the-art revolution in natural language processing (NLP). The use of these computationally expensive architectures has grown in recent months, despite the parallelization techniques already in place. This is driven by the high performance obtained by increasing the number of learnable parameters in these architectures while maintaining the models' predictability, which makes it difficult to do research with limited computational resources. A particularly restrictive element is memory usage, which seriously hinders the replication of experiments. We present a new architecture, called Informer, which seeks to exploit the concept of information organization. For evaluation, we use a neural machine translation (NMT) task, the English-Vietnamese IWSLT15 dataset (Luong and Manning, 2015). In this paper, we also compare this proposal with architectures that reduce the computational cost of self-attention to O(n · r), such as Linformer (Wang et al., 2020). In addition, we improve the state-of-the-art BLEU score on this dataset from 33.27 to 35.11.
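For context, the sketch below illustrates the kind of low-rank attention that achieves the O(n · r) cost mentioned above, in the spirit of Linformer (Wang et al., 2020): the length-n key and value sequences are projected down to r << n positions, so the attention matrix is n × r rather than n × n. This is only an illustrative assumption of how such a layer can be written, not the Informer or Linformer implementation; all tensor names and shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def lowrank_attention(q, k, v, proj_k, proj_v):
    """Attention with keys/values compressed from n to r positions (O(n*r))."""
    # q, k, v: (batch, n, d); proj_k, proj_v: (r, n) learned projections
    k = torch.matmul(proj_k, k)   # (batch, r, d)
    v = torch.matmul(proj_v, v)   # (batch, r, d)
    # scores is (batch, n, r) instead of the quadratic (batch, n, n)
    scores = torch.matmul(q, k.transpose(-2, -1)) / (q.size(-1) ** 0.5)
    return torch.matmul(F.softmax(scores, dim=-1), v)  # (batch, n, d)

# Illustrative usage: n = 512 tokens compressed to r = 64 positions.
batch, n, d, r = 2, 512, 64, 64
q, k, v = (torch.randn(batch, n, d) for _ in range(3))
proj_k, proj_v = torch.randn(r, n), torch.randn(r, n)
out = lowrank_attention(q, k, v, proj_k, proj_v)
print(out.shape)  # torch.Size([2, 512, 64])
```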