Abstract. Relative entropy-based pruning has been shown to be efficient for pruning language models for more than a decade. Recently, this method has ...
From the results, we observe that with relative-entropy pruning we obtain better translation quality in terms of BLEU than with significance pruning up to a pruning level of 20%, ...
On Dec 1, 2012, Wang Ling and others published "Improving Relative-Entropy Pruning using Statistical Significance".
1 Introduction
2 Combining Relative Entropy and Significance Pruning
  2.1 Relative Entropy Pruning
  2.2 Significance Pruning
  2.3 Error Analysis
  2.4 ...
[exact test] In statistics, an exact (significance) test is a test where all assumptions, upon which the derivation of the distribution of the test statistic is based, are met.
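In this setting, the exact test commonly used for significance pruning is Fisher's exact test on a 2×2 contingency table of co-occurrence counts, which yields a p-value without any asymptotic approximation. Below is a minimal Python sketch under that assumption; the function name and the counts in the usage line are hypothetical, not taken from the paper.

```python
from scipy.stats import fisher_exact

def cooccurrence_p_value(joint, src_count, tgt_count, total):
    """p-value that a (source, target) pair co-occurs `joint` times or
    more by chance, given its marginal counts among `total` sentence
    pairs (one-sided Fisher's exact test on the 2x2 table)."""
    table = [
        [joint, src_count - joint],
        [tgt_count - joint, total - src_count - tgt_count + joint],
    ]
    _, p = fisher_exact(table, alternative="greater")
    return p

# Hypothetical counts: pair seen 8 times; source seen 10 times, target
# 12 times, in a corpus of 10,000 sentence pairs.
print(cooccurrence_p_value(8, 10, 12, 10_000))
```

Entries whose p-value exceeds a chosen threshold would be considered statistically insignificant and become candidates for pruning.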
Improving Relative-Entropy Pruning using Statistical Significance. Wang Ling, Nadi Tomeh, Guang Xiang, Alan W. Black, Isabel Trancoso. COLING 2012.
It is shown that the relative entropy resulting from pruning a single N-gram can be computed exactly and efficiently for backoff models, and that a ...
A criterion for pruning parameters from N-gram backoff language models is developed, based on the relative entropy between the original and pruned models.
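To make the criterion concrete, the sketch below computes the relative entropy incurred by pruning a single explicit entry from a toy backoff model by brute force, summing over the whole vocabulary; the cited result is that this quantity can be computed exactly and efficiently (in closed form) without such a sum. The function and the toy distributions are illustrative assumptions, not the paper's code.

```python
import math

def kl_after_pruning(p_given_h, p_unigram, prune_word):
    """D(p || p') over one history h when the explicit entry for
    `prune_word` is removed and its mass is redistributed through a
    renormalized backoff weight.

    p_given_h: dict word -> p(w|h) for words with explicit entries
    p_unigram: dict word -> lower-order p(w) over the full vocabulary
    """
    # Backoff weight: leftover conditional mass / leftover unigram mass.
    def backoff_weight(explicit):
        return (1.0 - sum(explicit.values())) / \
               (1.0 - sum(p_unigram[w] for w in explicit))

    alpha = backoff_weight(p_given_h)
    kept = {w: p for w, p in p_given_h.items() if w != prune_word}
    alpha_new = backoff_weight(kept)  # renormalized after pruning

    def p_orig(w):
        return p_given_h[w] if w in p_given_h else alpha * p_unigram[w]

    def p_pruned(w):
        return kept[w] if w in kept else alpha_new * p_unigram[w]

    return sum(p_orig(w) * math.log(p_orig(w) / p_pruned(w))
               for w in p_unigram if p_orig(w) > 0)

# Hypothetical toy model: a unigram distribution and explicit entries
# for a single history h.
uni = {"a": 0.5, "b": 0.3, "c": 0.2}
bi = {"a": 0.6, "b": 0.25}
print(kl_after_pruning(bi, uni, "b"))  # small positive KL divergence
```

In a full pruner this quantity would additionally be weighted by the history probability p(h), and every n-gram whose weighted divergence falls below a threshold would be removed.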