
Computer Science and Information Systems 2024 Volume 21, Issue 3, Pages: 899-921
https://doi.org/10.2298/CSIS230912017M
Full text (261 KB)


Reaching quality and efficiency with a parameter-efficient controllable sentence simplification approach

Menta Antonio (E.T.S.I. Informática (UNED), C. de Juan del Rosal, Madrid, Spain), amenta1@alumno.uned.es
Garcia-Serrano Ana (E.T.S.I. Informática (UNED), C. de Juan del Rosal, Madrid, Spain), agarcia@lsi.uned.es

The task of Automatic Text Simplification (ATS) aims to transform texts to improve their readability and comprehensibility. Current solutions are based on Large Language Models (LLMs). These models perform well but require powerful computing resources and large amounts of data for fine-tuning in specific, technical domains, which prevents most researchers from adapting them to their area of study. The main contributions of this research are as follows: (1) an accurate solution for settings where powerful resources are not available, which exploits transfer learning across domains together with a set of linguistic control features on top of a reduced-size pre-trained language model (T5-small), making the approach accessible to a broader range of researchers and practitioners; (2) an evaluation of the model on two well-known datasets, TurkCorpus and ASSET, and an analysis of the influence of control tokens on the SimpleText corpus, focusing on the domains of Computer Science and Medicine. Finally, a detailed discussion comparing our approach with state-of-the-art models for sentence simplification is included.
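Controllable simplification of this kind is typically implemented by prepending control tokens to the source sentence before it is fed to the encoder. The sketch below illustrates the general idea only: the token names and target ratios follow the common ACCESS-style convention and are illustrative assumptions, not necessarily the exact tokens or values used by the authors.

```python
def build_controlled_input(sentence: str, controls: dict[str, float]) -> str:
    """Prepend control tokens (ACCESS-style naming, assumed here for
    illustration) to a source sentence. Each token asks the model for a
    target ratio relative to the source, e.g. a character-length ratio
    of 0.80 requests an output roughly 20% shorter."""
    prefix = " ".join(f"<{name}_{value:.2f}>" for name, value in controls.items())
    return f"{prefix} {sentence}"

# Hypothetical control settings: shorter output, moderate lexical change.
example = build_controlled_input(
    "Automatic Text Simplification transforms texts to improve readability.",
    {"NbChars": 0.80, "LevSim": 0.75, "WordRank": 0.70, "DepTreeDepth": 0.85},
)
print(example)
```

At fine-tuning time, each training pair would be rewritten this way with ratios computed from the actual source/reference pair, so the model learns to condition its output on the requested values.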

Keywords: Text Simplification, Transfer Learning, Language Models
