Abstract
The approximation of the function 1 / x by exponential sums has several interesting applications. It is well known that best approximations with respect to the maximum norm exist. Moreover, the error estimates exhibit exponential decay as the number of terms increases. Here we focus on the computation of the best approximations. In principle, the problem can be solved by the Remez algorithm; however, because of the very sensitive behaviour of the problem, the standard approach fails for a larger number of terms. The remedy described in the paper is the use of other independent variables of the exponential sum. We discuss the approximation error of the computed exponential sums up to 63 terms and point to a web page containing the corresponding coefficients.
1 Introduction
At first sight, the problem considered in this paper has an obvious solution. The best approximation of functions such as 1 / x by exponential sums
\(E_{k}(x)=\sum _{\nu =1}^{k}a_{\nu }\exp (-b_{\nu }x)\)    (1.1)
with respect to the maximum norm is well studied (cf. [3]). Even rather precise error estimates are known (cf. [4]). The approximation problem can be solved by the Remez algorithm which leads to a system of nonlinear equations. Since there is exactly one solution and the involved functions are analytic, the Newton method should be a perfect solver ensuring quadratic convergence.
This may be true for small values of k, but for \(k\rightarrow \infty \) the exponential decay of the approximation error \(e_{k}(x)=1/x-E_{k}(x)\) has a negative effect. The Remez algorithm requires exponential sums \(E_{k}\) which interpolate the function 1 / x at 2k points (in this case we say that \(E_{k}\) is feasible). Since \(e_{k}\) is rather small, tiny perturbations of \(E_{k}\) can lead to a difference \(e_{k}\) with fewer than 2k zeros. Note that the computation of \(e_{k}\) leads to a cancellation of the leading digits. The resulting relative error is \(\eta :=\mathrm {eps}/\left\| e_{k}\right\| \) (\(\mathrm {eps}\): machine precision), provided that 1 / x and \(E_{k}(x)\) are exact machine numbers. Let \(\tilde{e}_{k}\) be the floating-point result. Taking into account that the computation of \(E_{k}\) by the standard Remez algorithm is ill-conditioned, the final relative error of \(\tilde{e}_{k}\) may become larger than one so that \(\tilde{e}_{k}\) has the wrong sign.
Another difficulty is the fact that the error \(e_{k}\) may be much smaller than \(\sqrt{\mathrm {eps}}\). This fact prevents quadratic convergence.
Because of these difficulties (footnote 1) the best \(L^{\infty }\) approximation is often replaced by least squares approximations (cf. [7, 13]). Concerning ill-posedness of this least squares problem we refer to [17]. Furthermore, Sect. 2.3 will show that there are applications requiring the maximum norm, while the \(L^{2}\) norm is insufficient.
In Sect. 3 we recall the facts about the best approximation by exponential sums. To apply the Remez algorithm we introduce a crucial ‘trick’ in Sect. 4.3: we use other variables than the coefficients in (1.1). As a consequence, the represented exponential sums are always feasible. The drawback is an increased computational cost, since we apply an outer Newton method involving several inner Newton iterations per outer iteration step. The best approximation refers to an underlying interval [1, R]. In Sect. 4.6 we describe how to proceed from one R to another \(R^{\prime }\) so that good initial values are always available. Another ‘continuation’ is the step from k to \(k+1.\)
The tables in this contribution show the approximation error as a function of the parameters k and R. They are part of a large collection of best approximations as described in Sect. 2.4. The obtained approximation errors are compared with the theoretical bounds.
2 Definition, properties, and applications
2.1 Definition
Exponential sums are expressions of the form (1.1) with 2k parameters (footnote 2) \(a_{\nu }\) and \(b_{\nu }\). Here we discuss the approximation of the function 1 / x by exponential sums in a positive interval \([a,b]\subset (0,\infty ]\) with respect to the maximum norm \(\left\| \cdot \right\| _{\infty ,[a,b]}\). Note that \(b=\infty \) is permitted. Let \(E_{k}^{*}\) be the best approximation. To be precise, its coefficients \(a_{\nu }\), \(b_{\nu }\) in (1.1) depend on the underlying interval [a, b], i.e., \(a_{\nu }=a_{\nu ,[a,b]}\) and \(b_{\nu }=b_{\nu ,[a,b]}.\) The minimal error is denoted by
\(\varepsilon _{[a,b]}(k):=\left\| \tfrac{1}{\cdot }-E_{k}^{*}\right\| _{\infty ,[a,b]}.\)
2.2 Properties
Using the map of \(x\in [a,b]\) to \(x^{\prime }:=x/a\in [1,b/a],\) one finds that the best approximations in [a, b] and [1, b / a] and their approximation errors are related by
\(E_{k,[a,b]}^{*}(x)=\tfrac{1}{a}\,E_{k,[1,R]}^{*}(x/a)\quad \text {and}\quad \varepsilon _{[a,b]}(k)=\tfrac{1}{a}\,\varepsilon _{[1,R]}(k),\)
where we introduce \(R:=b/a.\) Hence, it is sufficient to study the approximation problem for different values of R and k. It turns out that also the choice \(R=\infty \) makes sense.
The optimal exponential sums allow a stable evaluation since the coefficients \(a_{\nu }\), \(b_{\nu }\) are positive.
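The reduction to the reference interval can be made explicit in terms of the coefficients. The following sketch (an illustration added here, not part of the original computation; the function name rescale_coefficients is hypothetical) transforms coefficients of a best approximation on [1, R] into those of the corresponding best approximation on [a, aR], assuming the scaling relation stated above:

```python
def rescale_coefficients(a_coef, b_coef, a):
    """Sketch of the reduction in Sect. 2.2: if E_{k,[1,R]}(y) = sum_nu a_nu*exp(-b_nu*y)
    is the best approximation of 1/y on [1, R], then
        (1/a) * E_{k,[1,R]}(x/a) = sum_nu (a_nu/a) * exp(-(b_nu/a) * x)
    is the best approximation of 1/x on [a, a*R]; its error is eps_[1,R](k)/a."""
    return [c / a for c in a_coef], [d / a for d in b_coef]
```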
2.3 Applications
The typical property \(\exp (x+y)=\exp (x)\exp (y)\) of the exponential function shows that \(E_{k}(\sum _{\mu }x_{\mu })=\sum _{\nu =1}^{k}a_{\nu }\prod _{\mu } \exp (-b_{\nu }x_{\mu })\) is a sum of separable terms.
A direct application is used in the second-order Møller–Plesset theory (MP2), where the energy is a sum of terms \(\frac{\ldots }{\varepsilon _{a}+\varepsilon _{b}-\varepsilon _{i}-\varepsilon _{j}}\) (cf. [1] and [16]). The effect of using \(E_{k}^{*}\) is demonstrated by a simpler example. Consider \(S:=\sum _{i,j=1}^{N} \frac{v_{i}v_{j}}{\varepsilon _{i}+\varepsilon _{j}}\) with large N, where \(\varepsilon _{i}+\varepsilon _{j}\in [a,b].\) The computation of S costs \({\mathcal {O}}(N^{2})\) operations. Replacing \(\frac{1}{\varepsilon _{i}+\varepsilon _{j}}\) by \(E_{k}^{*}=E_{k,[a,b]}^{*}\) yields
\(S\approx \sum _{\nu =1}^{k}a_{\nu }\Big (\sum _{i=1}^{N}v_{i}\exp (-b_{\nu }\varepsilon _{i})\Big )^{2},\)
which can be evaluated by \({\mathcal {O}}(Nk)\) operations.
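For illustration only (a sketch added here, not from the paper; the function names are hypothetical), the following Python fragment contrasts the direct \({\mathcal {O}}(N^{2})\) evaluation of S with the separated \({\mathcal {O}}(Nk)\) evaluation; the arrays a, b are assumed to hold the coefficients of \(E_{k,[a,b]}^{*}\):

```python
import numpy as np

def sum_direct(v, eps):
    """Direct O(N^2) evaluation of S = sum_{i,j} v_i v_j / (eps_i + eps_j)."""
    N = len(v)
    return sum(v[i] * v[j] / (eps[i] + eps[j]) for i in range(N) for j in range(N))

def sum_separated(v, eps, a, b):
    """O(N*k) evaluation after replacing 1/x by sum_nu a_nu * exp(-b_nu * x):
    S ~ sum_nu a_nu * (sum_i v_i * exp(-b_nu * eps_i))**2."""
    inner = np.exp(-np.outer(b, eps)) @ np.asarray(v)   # shape (k,)
    return float(np.dot(a, inner ** 2))
```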
The inverse matrix \(A^{-1}\) can be approximated by \(E_{k}^{*}(A)\) with \(E_{k}^{*}=E_{k,[a,b]}^{*}\), provided that the spectrum of A is contained in [a, b]. If \(A=TDT^{-1}\) (D diagonal), the estimate with respect to the spectral norm is
\(\left\| A^{-1}-E_{k}^{*}(A)\right\| _{2}\le \left\| T\right\| _{2}\left\| T^{-1}\right\| _{2}\left\| \tfrac{1}{\cdot }-E_{k}^{*}\right\| _{\infty ,[a,b]}.\)
The maximum norm \(\left\| \tfrac{1}{\cdot }-E_{k}^{*}\right\| _{\infty ,[a,b]}\) cannot be replaced by a weaker norm such as the \(L^{2}\) norm. The replacement of the inverse \(A^{-1}\) by \(E_{k}^{*}(A)\) is helpful for Kronecker matrices. Let \(A_{i}\) be positive definite matrices. Then the Kronecker matrix \(\mathbf {A}:=A_{1}\otimes I\otimes I+I\otimes A_{2}\otimes I+I\otimes I\otimes A_{3}\) has the approximate inverse
\(\mathbf {B}_{k}:=\sum _{\nu =1}^{k}a_{\nu }\,\exp (-b_{\nu }A_{1})\otimes \exp (-b_{\nu }A_{2})\otimes \exp (-b_{\nu }A_{3})\)
(cf. [10, Prop. 9.34]) with the spectral norm \(\left\| \mathbf {A}^{-1}-\mathbf {B}_{k}\right\| _{2}\le \left\| \tfrac{1}{\cdot }-E_{k}^{*}\right\| _{\infty ,[a,b]},\) where [a, b] contains the spectrum of \(\mathbf {A}.\)
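A small sketch of this construction (added for illustration; approx_inverse_kron is a hypothetical helper, and the dense Kronecker products are formed here only to keep the example short, whereas in practice the factors are kept in tensor form):

```python
import numpy as np
from scipy.linalg import expm

def approx_inverse_kron(A1, A2, A3, coef_a, coef_b):
    """B_k = sum_nu a_nu * exp(-b_nu*A1) (x) exp(-b_nu*A2) (x) exp(-b_nu*A3),
    an approximate inverse of A = A1(x)I(x)I + I(x)A2(x)I + I(x)I(x)A3.
    coef_a, coef_b: coefficients of E_k^* on an interval containing the spectrum of A."""
    n = A1.shape[0] * A2.shape[0] * A3.shape[0]
    B = np.zeros((n, n))
    for a_nu, b_nu in zip(coef_a, coef_b):
        B += a_nu * np.kron(np.kron(expm(-b_nu * A1), expm(-b_nu * A2)),
                            expm(-b_nu * A3))
    return B
```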
2.4 Available data
This paper contains tables of the approximation errors \(\varepsilon _{[1,R]}(k)\) for \(1\le k\le 56\) and various R. The data for \(1\le k\le 7\) are shown in Table 1. The parameter R takes all values \(R=n\cdot 10^{m}\) (\(n\in \mathbb {N},\)\(m\in \mathbb {N}_{0}:=\mathbb {N}\cup \{0\}\)), subject to the following bounds. The largest value is \(R=R_{k}^{*},\) where \(R_{k}^{*}\) is explained in Sect. 3.3. Larger values of R are uninteresting since \(E_{k,[1,R]}^{*}=E_{k,[1,R_{k}^{*}]}^{*}\) for \(R\ge R_{k}^{*}.\) Besides \(R\ge 2\) the lower bound is implicitly given by \(\varepsilon _{[1,R]}(k)\approx 10^{-16}.\) Such values are available for all \(1\le k\le 56\) (see web page https://www.mis.mpg.de/scicomp/EXP_SUM/1_x/). The Tables 2, 3, 4, 5, 6, 7 and 8 given here contain only results for R being powers of 10. The additional Table 9 is restricted to \(R=\infty \) and \(57\le k\le 63\).
The web page https://www.mis.mpg.de/scicomp/EXP_SUM/1_x/ contains a complete table (see file ‘tabelle’). For each pair \(\left( k,R\right) \) contained in the table there is a file (footnote 3) with the coefficients (footnote 4) \(a_{\nu }\), \(b_{\nu }\) (\(1\le \nu \le k\)). The file contains additional data which are important as input for the computer program. In particular, the points \(\xi _{i}\) (\(1\le i\le 2k\)) with \(1/\xi _{i}=E_{k}^{*}(\xi _{i})\) are given (cf. Sect. 4.3).
Approximations of other functions are mentioned in [8].
3 Existence and error estimates
3.1 Existence and equioscillation
The approximation problem is closely related to the interpolation by exponential sums. Because of the nonlinear nature, an interpolating exponential sum may fail to exist (example: \(f(x)=x\) cannot be interpolated by any \(E_{1}\) at \(x=\pm 1\)). For sufficient conditions we refer to [3, §VI.3].
Since \(f(x)=1/x\) is completely monotone for \(x>0\), i.e., \(\left( -1\right) ^{n}f^{(n)}(x)>0\) for all \(n\in \mathbb {N}_{0},\) the unique existence of the best approximation \(E_{k}^{*}\) is guaranteed (cf. [3, §VII]). Moreover, \(E_{k}^{*}\) satisfies the equioscillation property which is well known for polynomials (cf. de la Vallée Poussin [6, p. 85], Süli–Mayers [15, Theorem 8.3]): the error \(e_{k}:=\frac{1}{\cdot }-E_{k}^{*}\) is extreme at \(2k+1\) points \(\mu _{i}\) with
\(1=\mu _{0}<\mu _{1}<\cdots <\mu _{2k}\le R\)
and
\(e_{k}(\mu _{i})=\left( -1\right) ^{i}\,\varepsilon _{[1,R]}(k)\qquad (0\le i\le 2k).\)    (3.1)
Each interval \(\left( \mu _{i-1},\mu _{i}\right) \) contains exactly one zero of \(e_{k},\) i.e., there are \(\xi _{i}\in \left( \mu _{i-1},\mu _{i}\right) \) with \(e_{k}(\xi _{i})=0.\) The latter equation states the interpolation property
\(E_{k}^{*}(\xi _{i})=1/\xi _{i}\qquad (1\le i\le 2k).\)
3.2 Error estimate for finite R
The precise error estimates involve elliptic integrals (cf. [4, Sect. 2]). Estimating these special functions by exponentials yields
(cf. [4, (2.9)], [5, (29)]). A comparison with the numbers from Tables 1, 2, 3, 4, 5, 6, 7, 8 and 9 shows that this bound can be improved to \(O\left( \frac{1}{\sqrt{R}}\exp (-\frac{\pi ^{2}k}{\log (6R)})\right) \). All errors computed in Sect. 2.4 satisfy
The small ratios occur for small R. We do not claim that a similar inequality holds for \(k\rightarrow \infty .\)
3.3 Case of \(R=\infty \)
Consider the error \(e_{k,[1,R]}:=\frac{1}{\cdot }-E_{k,[1,R]}^{*}\) for varying R. For sufficiently small R, the last extremum \(e_{k,[1,R]}(\mu _{2k})=\varepsilon _{[1,R]}(k)\) is located at \(\mu _{2k}=R\) with \(\frac{\mathrm {d}}{\mathrm {d}x}e_{k,[1,R]}(\mu _{2k})>0.\) As R increases, \(\frac{\mathrm {d}}{\mathrm {d}x}e_{k,[1,R]}(\mu _{2k})\) decreases until, for a certain \(R=R_{k}^{*},\) \(\frac{\mathrm {d}}{\mathrm {d}x}e_{k,[1,R]}(\mu _{2k})=0\) holds. Then \(|e_{k,[1,R_{k}^{*}]}(x)|\le \varepsilon _{[1,R_{k}^{*}]}(k)\) also holds for \(R\ge R_{k}^{*}\) (cf. Fig. 2). Hence
\(E_{k,[1,R]}^{*}=E_{k,[1,R_{k}^{*}]}^{*}\quad \text {and}\quad \varepsilon _{[1,R]}(k)=\varepsilon _{[1,R_{k}^{*}]}(k)=\varepsilon _{[1,\infty ]}(k)\qquad \text {for all }R\ge R_{k}^{*}.\)
For \(R=\infty \) the error can be proved to be bounded by
(cf. [5, (30)]). The values \(\varepsilon _{[1,R_{k}^{*}]}(k)=\varepsilon _{[1,\infty ]}(k)\) shown in the tables behave better. The function \(\log (2+k)\exp (-\pi \sqrt{2k}\,)\) describing the asymptotic decay was proposed by D. Braess. The approximation errors for \(1\le k\le 63\) satisfy the two-sided inequality
Again, we do not claim that the inequality is valid for \(k\rightarrow \infty .\)
4 Computation
4.1 Machine precision
The coefficients of \(E_{k}^{*}\) given in https://www.mis.mpg.de/scicomp/EXP_SUM/1_x/ are computed with extended precision (\(\mathrm {eps}=1_{10}{\text {-}}19\)). This fact allows us to reach approximations with \(\varepsilon _{[1,R]}\approx 10^{-16}\) and better (footnote 5). Of course, using the corresponding coefficients in double precision, the floating-point errors of the function evaluation may be larger than \(\varepsilon _{[1,R]}.\)
Using standard double precision, computations up to \(\varepsilon _{[1,R]}\approx 10^{-13}\) can be expected.
4.2 Remez algorithm
Usually the Remez algorithm is applied to the \(L^{\infty }\) approximation by polynomials or rational functions (cf. [18]). Approximation by nonlinear families is studied by Barrar and Loeb [2].
4.2.1 Polynomial case
First we recall the original Remez algorithm (cf. [14]) applied to the best polynomial approximation \(\Vert f-P_{n}^{*}\Vert _{\infty ,[a,b]}=\min \{\Vert f-P_{n}\Vert _{\infty ,[a,b]}:P_{n}\ \text {polynomial of degree}\le n\}.\) We start with a polynomial \(P_{n}^{(0)}\) interpolating f at \(n+1\) points in (a, b):
\(P_{n}^{(0)}(\xi _{i})=f(\xi _{i})\qquad (1\le i\le n+1,\ a<\xi _{1}<\cdots <\xi _{n+1}<b).\)
Algorithm: Let \(\mu _{j}^{(0)}\) \((0\le j\le n+1)\) be the extremal points of \(f-P_{n}^{(0)}\) in the intervals \([a,\xi _{1}),\) \((\xi _{1},\xi _{2}),\ldots ,(\xi _{n+1},b]\). Then
\(P_{n}^{(1)}(\mu _{j}^{(0)})+\left( -1\right) ^{j}E=f(\mu _{j}^{(0)})\qquad (0\le j\le n+1)\)    (4.2)
is a linear system of \(n+2\) equations for the \(n+1\) coefficients of the next polynomial \(P_{n}^{(1)}\) and the constant E. The oscillatory error \((-1)^{j}E\) ensures the existence of \(n+1\) distinct interpolation points \(\xi _{i}\in (\mu _{i-1}^{(0)},\mu _{i}^{(0)}).\) Now we repeat the algorithm with the polynomial \(P_{n}^{(1)}\) and new extremal points \(\mu _{j}^{(1)}\) of \(f-P_{n}^{(1)}\) instead of \(P_{n}^{(0)}\) and \(\mu _{j}^{(0)}.\)
The limiting polynomial \(P_{n}^{*}\) satisfies \(P_{n}^{*}(\mu _{j}^{*})+(-1)^{j}E^{*}=f(\mu _{j}^{*})\) at the extremal points \(\mu _{j}^{*}\) of \(f-P_{n}^{*},\) while \(|E^{*}|\) is the maximum norm error.
Eliminating E in (4.2), we obtain the equivalent system of \(n+1\) equations
\(\left[ f-P_{n}^{(1)}\right] (\mu _{j-1}^{(0)})+\left[ f-P_{n}^{(1)}\right] (\mu _{j}^{(0)})=0\qquad (1\le j\le n+1).\)    (4.3)
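For illustration, one step of this classical iteration can be written down directly. The following sketch is not from the paper; remez_step_poly is a hypothetical helper, and the monomial basis is used only for brevity (a Chebyshev basis would be better conditioned):

```python
import numpy as np

def remez_step_poly(f, mu, n):
    """One Remez step (sketch): given the n+2 extremal points mu[0..n+1],
    solve the linear system (4.2)
        P(mu_j) + (-1)^j * E = f(mu_j),  0 <= j <= n+1,
    for the n+1 coefficients of P (monomial basis) and the constant E."""
    mu = np.asarray(mu, dtype=float)
    m = len(mu)                                              # m = n + 2
    A = np.zeros((m, m))
    A[:, :n + 1] = np.vander(mu, n + 1, increasing=True)     # columns 1, x, ..., x^n
    A[:, n + 1] = [(-1) ** j for j in range(m)]              # alternating column for E
    rhs = np.array([f(t) for t in mu])
    sol = np.linalg.solve(A, rhs)
    return sol[:n + 1], sol[n + 1]                           # polynomial coefficients, E
```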
4.2.2 Case of exponential sums
Instead of \(n+1\) polynomial coefficients we now have 2k coefficients of \(E_{k}.\) The equioscillation property yields the necessary 2k equations. Condition (3.1) implies that
\(e_{k}(\mu _{i-1})+e_{k}(\mu _{i})=0\qquad (1\le i\le 2k)\)    (4.4)
(cf. (4.3)). This is the basis of the Remez algorithm. To apply the iterative algorithm, one has to start from a function \(E_{k}\) as in (1.1) such that the difference \(e_{k}=\frac{1}{\cdot }-E_{k}\) has 2k simple zeros \(\xi _{i}\) (\(1\le i\le 2k\)). We call such an exponential sum a feasible \(E_{k}.\)
Next one has to determine the extrema \(e_{k}(\mu _{i}).\) The arguments \(\mu _{i}\) belong to \(\left( \xi _{i},\xi _{i+1}\right) \) for \(1\le i\le 2k-1.\) The extremum \(e_{k}(\mu _{0})\) is taken at the boundary: \(\mu _{0}=1.\) If \(R\le R_{k}^{*},\) the extremum \(e_{k}(\mu _{2k})\) belongs to \(\mu _{2k}=R.\) If \(R>R_{k}^{*},\)\(\mu _{2k}\) lies in \(\left( \xi _{2k},\infty \right) \).
Given the values \(\mu _{i},\) (4.4) represents 2k equations for the 2k coefficients. This yields a new \(E_{k}^{\mathrm {new}}.\) If \(E_{k} ^{\mathrm {new}}\) is feasible, one can determine the new \(\mu _{i}\) etc.
The main difference to the polynomial case in Sect. 4.2.1 is the nonlinear nature of the system (4.4).
4.3 Choice of variables
It seems to be obvious to use the parameters \(a_{\nu }\), \(b_{\nu }\) in (1.1) for the computation, i.e., \(E_{k}=E_{k}(\cdot ;\mathbf {a}),\) where
\(\mathbf {a}=\left( a_{1},\ldots ,a_{k},b_{1},\ldots ,b_{k}\right) .\)
Inserting \(E_{k}(\cdot ;\mathbf {a})\) into (4.4) yields 2k nonlinear equations \(\phi _{i}(\mathbf {a})=0\) (\(1\le i\le 2k\)). In theory, Newton’s method should converge to the desired solution \(E_{k}^{*}.\) However, this does not work in practice. This is why the computation of best \(L^{\infty }\) exponential sums is regarded as a hardly solvable problem.
The cause of the difficulty is explained as follows. Due to the good approximation, \(\varepsilon _{[1,R]}(k)\) is small. Assume that there is an iterate \(E_{k}\) with \(\Vert \frac{1}{\cdot }-E_{k}\Vert _{\infty }\le 10^{-13}.\) Then a perturbation of \(E_{k}\) by a tiny shift of, e.g., \(2\cdot 10^{-13}\) may yield an \(E_{k}^{\mathrm {new}}\) which does not interpolate \(\frac{1}{\cdot }.\) Hence there are no zeros \(\xi _{i}\) (or at least not 2k of them), \(E_{k}^{\mathrm {new}}\) is infeasible, and the algorithm cannot be continued.
Instead we introduce other variables describing \(E_{k}.\) Since the interpolating exponential sum is unique, we describe a feasible exponential sum \(E_{k}\) by the interpolation points \(\xi _{i}.\) Using the vector
\(\varvec{\xi }=\left( \xi _{1},\xi _{2},\ldots ,\xi _{2k}\right) ,\qquad \xi _{1}<\xi _{2}<\cdots <\xi _{2k},\)
we can express a feasible \(E_{k}\) by \(\hat{E}_{k}(\cdot ;\varvec{\xi }),\) the unique exponential sum of the form (1.1) satisfying \(\hat{E}_{k}(\xi _{i};\varvec{\xi })=1/\xi _{i}\) for \(1\le i\le 2k.\)
Given a feasible \(E_{k}(\cdot ;\mathbf {a}),\) we can determine the zeros \(\xi _{i}\) of \(\frac{1}{\cdot }-E_{k}(\cdot ;\mathbf {a})\) and obtain \(\varvec{\xi }=\varvec{\xi }(\mathbf {a}),\) i.e., \(E_{k}(\cdot ;\mathbf {a})=\hat{E}_{k}(\cdot ;\varvec{\xi }(\mathbf {a})).\) On the other hand, given \(\varvec{\xi },\) the interpolating \(E_{k}(\cdot ;\mathbf {a})\) can be determined by Newton’s method. This yields the inverse mapping \(\mathbf {a}=\mathbf {a}(\varvec{\xi })\); i.e., \(\hat{E}_{k}(\cdot ;\varvec{\xi })=E_{k}(\cdot ;\mathbf {a}(\varvec{\xi })).\)
4.4 Computation of \(\hat{E}_{k}(\cdot ;\varvec{\xi })\): inner iteration
For the practical computation of \(\hat{E}_{k}(\cdot ;\varvec{\xi })\), one uses the pair \((\varvec{\xi },\mathbf {a}(\varvec{\xi })).\) If one wants to determine \(\hat{E}_{k}(\cdot ;\varvec{\xi }^{\prime })\) for \(\varvec{\xi }^{\prime }\) close to \(\varvec{\xi },\) one has to solve \(E_{k}(\xi _{i}^{\prime };\mathbf {a}^{\prime })=1/\xi _{i}^{\prime }\) with respect to \(\mathbf {a}^{\prime }.\) Here we exploit the fact that the Newton method has a starting value \(\mathbf {a}=\mathbf {a}(\varvec{\xi })\) very close to \(\mathbf {a}^{\prime }=\mathbf {a}(\varvec{\xi }^{\prime })\).
The interpolation is harder to compute if \(\xi _{i}\approx \xi _{i+1}\) are very close. However, the zeros \(\xi _{i}\) of the best approximation \(E_{k}^{*}\) are well separated (cf. Remark 4.2).
Note that the use of the parameters \(\varvec{\xi }\) ensures that \(\hat{E}_{k}(\cdot ;\varvec{\xi })\) is a feasible exponential sum. The drawback of this approach is a larger computational work. Instead of the evaluation of \(E_{k}(\cdot ;\mathbf {a})\) the Newton method requires several evaluations of exponential sums and their derivatives (which are again exponential sums).
In the following, iterates \(\varvec{\xi }^{0},\varvec{\xi }^{1},\ldots \) of an outer iteration will appear. Each \(\varvec{\xi }^{\nu }\) requires an inner iteration for the computation of \(\mathbf {a} (\varvec{\xi }^{\nu })\) by the Newton method described above. A standard value of the number of inner iteration steps used in Sect. 2.4 is 4.
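The inner iteration can be sketched as follows (an illustration with hypothetical names eval_E and coeffs_from_zeros, not taken from the paper; it uses the analytic Jacobian of the interpolation conditions, which is one possible realisation, and the default of 4 steps corresponds to the standard value mentioned above):

```python
import numpy as np

def eval_E(x, a, b):
    """E_k(x) = sum_nu a_nu * exp(-b_nu * x) for an array of points x."""
    return np.exp(-np.outer(np.asarray(x, dtype=float), b)) @ a

def coeffs_from_zeros(xi, a_start, b_start, steps=4):
    """Inner iteration (sketch): given interpolation points xi (length 2k) and a
    nearby starting guess (a_start, b_start), perform a few Newton steps on the
    2k interpolation conditions E_k(xi_i) = 1/xi_i."""
    xi = np.asarray(xi, dtype=float)
    a = np.asarray(a_start, dtype=float).copy()
    b = np.asarray(b_start, dtype=float).copy()
    k = len(a)
    for _ in range(steps):
        Eb = np.exp(-np.outer(xi, b))                  # shape (2k, k)
        r = Eb @ a - 1.0 / xi                          # residual of the interpolation conditions
        Ja = Eb                                        # dE/da_nu = exp(-b_nu * x)
        Jb = -Eb * (a[None, :] * xi[:, None])          # dE/db_nu = -a_nu * x * exp(-b_nu * x)
        d = np.linalg.solve(np.hstack([Ja, Jb]), r)    # (2k, 2k) Newton system
        a, b = a - d[:k], b - d[k:]
    return a, b
```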
4.5 Computation of the extrema
In principle, given \(\varvec{\xi }\) and \(\mathbf {a}=\mathbf {a}(\varvec{\xi }),\) the locations \(\mu _{i}\) of the extrema can be determined by Newton’s method. For the results given in the tables the extrema are computed differently. In each interval \([\xi _{i-1},\xi _{i}]\) the values of \(e_{k}\) at \(x_{\nu }=\xi _{i-1}+\nu \left( \xi _{i}-\xi _{i-1}\right) /N\) are evaluated for \(\nu =1,\ldots ,N-1.\) Let \(\nu ^{*}\) be the index with \(x_{\nu ^{*}}={\text {argmax}}_{\nu }|e_{k}(x_{\nu })|\) and let P be the cubic interpolation at \(x_{\nu ^{*}-1},x_{\nu ^{*}},x_{\nu ^{*}+1}.\) The location of the maximum of P is taken as \(\mu _{i}.\)
Note that the evaluation of \(e_{k}\) around the extremal point \(\mu _{i}\) is harmless because of \(\frac{\mathrm {d}}{\mathrm {d}x}e_{k}(\mu _{i})=0.\) Already the choice of \(x_{\nu ^{*}}\) leads to an error \(|e_{k}(x_{\nu ^{*} })|=(1+{\mathcal {O}}(N^{-2}))\max _{\xi _{i-1}\le x\le \xi _{i}}|e_{k} (x)|+{\mathcal {O}}(\mathrm {eps}).\) Cubic interpolation leads to \(|e_{k}(\mu _{i}^{\mathrm {cubic}})|=(1+{\mathcal {O}}(N^{-8}))\max _{\xi _{i-1}\le x\le \xi _{i}}|e_{k}(x)|+{\mathcal {O}}(\mathrm {eps}).\) Here \({\mathcal {O}} (\mathrm {eps})\) is the unavoidable cancellation error.
Remark 4.1
Since the computational work increases with N, only the final computations should be performed with a large N. Intermediate steps can be done with a small N.
The values in the tables are based on \(N=1000.\)
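The extremum search of Sect. 4.5 may be sketched as follows (hypothetical helper extremum_in_interval, added for illustration; for brevity the final refinement uses the vertex of a parabola through three grid points instead of the cubic interpolation used for the tables, and the error function e is assumed to accept an array argument):

```python
import numpy as np

def extremum_in_interval(e, x_left, x_right, N=1000):
    """Locate the extremum of |e| in [xi_{i-1}, xi_i] (sketch): sample the N-1
    interior grid points, take the grid maximum, and refine it by a local
    polynomial fit (here a parabola through the three neighbouring points)."""
    x = np.linspace(x_left, x_right, N + 1)[1:-1]      # grid points x_1, ..., x_{N-1}
    vals = np.abs(e(x))
    j = int(np.argmax(vals))
    if j == 0 or j == len(x) - 1:
        return x[j]                                    # maximum next to the boundary
    y0, y1, y2 = vals[j - 1], vals[j], vals[j + 1]
    h = x[1] - x[0]
    denom = y0 - 2.0 * y1 + y2
    shift = 0.0 if denom == 0.0 else 0.5 * h * (y0 - y2) / denom
    return x[j] + shift                                # vertex of the local parabola
```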
4.6 Newton’s method for (4.4) and continuation principle
4.6.1 The outer iteration
The Newton method for solving (4.4) is formulated with respect to the parameters \(\varvec{\xi }.\) Equation (4.4) becomes \(0=e_{k} (\mu _{i-1})+e_{k}(\mu _{i})=\frac{1}{\mu _{i-1}}-\hat{E}_{k}(\mu _{i-1} ;\varvec{\xi })+\frac{1}{\mu _{i}}-\hat{E}_{k}(\mu _{i};\varvec{\xi })=:\phi _{i}(\varvec{\xi }).\) The evaluation of \(\phi _{i}(\varvec{\xi })\) is explained in Sect. 4.3. The derivatives \(\frac{\partial }{\partial \xi _{j}}\phi _{i}(\varvec{\xi })\) required by the Newton method are replaced by divided differences.
We call this iteration the outer iteration since each evaluation of \(\hat{E}_{k}(\cdot ;\varvec{\xi })\) for a new \(\varvec{\xi }\) requires inner iteration steps according to Sect. 4.4.
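A sketch of one step of this outer iteration (hypothetical helper newton_step_fd, added for illustration; phi is assumed to return the vector of the 2k residuals \(\phi _{i}(\varvec{\xi })\), and the step size h of the divided differences is a placeholder that has to be adapted to the spacing of the \(\xi _{i}\) and the working precision):

```python
import numpy as np

def newton_step_fd(phi, xi, h=1e-6):
    """One outer Newton step (sketch): the Jacobian of phi is replaced by
    one-sided divided differences with relative step h, as described above."""
    xi = np.asarray(xi, dtype=float)
    r = phi(xi)
    n = len(xi)
    J = np.empty((n, n))
    for j in range(n):
        xi_pert = xi.copy()
        xi_pert[j] += h * xi[j]                  # relative perturbation of xi_j
        J[:, j] = (phi(xi_pert) - r) / (h * xi[j])
    return xi - np.linalg.solve(J, r)            # undamped Newton correction
```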
If the Newton iteration is not successful, it is replaced by the damped version. The damping parameter should be chosen such that

- the approximation error \(\left\| e_{k}\right\| _{\infty }\) is decreasing,
- the zeros \(\xi _{i}\) of \(e_{k}\) do not come too close to each other.
The reason for the latter requirement is the fact that the \(\xi _{i}\) values of the best approximation \(E_{k,[1,R]}^{*}(\cdot ,\varvec{\xi })\) are well separated. More precisely, the following observation holds.
Remark 4.2
In a first approximation, the zeros behave as \(\xi _{i}\approx R^{(i/(2k))^{c}}\) with c between 1.2 and 1.3.
If it happens that \(\xi _{i}\) and \(\xi _{i+1}\) are too close, one should change the positions, e.g., by \(\xi _{i+1}:=(\xi _{i}+\xi _{i+2})/2\).
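Remark 4.2 can also be used to generate rough initial interpolation points, e.g. as in the following sketch (hypothetical helper initial_zeros; the exponent c = 1.25 is a placeholder within the range quoted in the remark). In practice the continuation strategies of Sects. 4.6.3 and 4.6.4 are preferable to such a cold start.

```python
import numpy as np

def initial_zeros(k, R, c=1.25):
    """Rough initial guess for the 2k interpolation points based on Remark 4.2:
    xi_i ~ R**((i/(2k))**c)."""
    i = np.arange(1, 2 * k + 1)
    return R ** ((i / (2.0 * k)) ** c)
```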
The continuation method explained in Sect. 4.6.3 ensures that the Newton method can be started with initial iterates close to the solution.
4.6.2 Start
The process is started by computations for \(k=1.\) Consider, e.g., \(R=2.\) A simple choice of interpolation points is \(\xi _{1}=4/3\) and \(\xi _{2}=5/3.\) The computation of \(\mathbf {a}(\varvec{\xi })\) is harmless. For all initial values \(0.1\le a_{1},b_{1}\le 5\), the Newton method converges (to \(a_{1}=1.831\ldots ,\ b_{1}=0.6694\ldots \)). After 6 steps of the outer Newton method for solving (4.4), one obtains the best approximation up to machine precision.
Given any best approximation \(E_{k,[1,R]}^{*}=E_{k,[1,R]}^{*} (\cdot ,\varvec{\xi }^{*})\) together with the interpolation points \(\varvec{\xi }^{*},\) one can obtain \(E_{k,[1,R^{\prime }]}^{*}\) for other \(R^{\prime }\) according to Sect. 4.6.3.
4.6.3 Change of R
In the following, k is fixed. Assume that a best approximation \(E_{k,[1,R]}^{*}\) is already available for some R.
The first task is to compute \(E_{k,[1,R^{\prime }]}^{*}\) for a larger \(R^{\prime }>R.\) The approximation error of \(E_{k,[1,R]}^{*}\) is \(\varepsilon _{[1,R]}(k).\) Take \(E_{k}:=E_{k,[1,R]}^{*}\) as initial value for the outer iteration of Sect. 4.6.1 on \([1,R^{\prime }].\) If \(R^{\prime }\le R_{k}^{*},\) the maximum in the last subinterval \([\xi _{2k},R^{\prime }]\) is taken at \(x=R^{\prime }.\) If \(R^{\prime }\) is not close enough to R, the maximum \(e_{k}(R^{\prime })\) may be much larger than \(\varepsilon _{[1,R]}(k)\) and the Newton method may fail. In that case one has to apply the continuation method: compute \(E_{k,[1,R_{m}]}^{*}\) for a sequence \(R=R_{0}<R_{1}<\cdots <R_{M}=R^{\prime },\) where each \(R_{m+1}\) is sufficiently close to \(R_{m}.\)
If it happens that the last extremum is at \(\mu _{2k}<R^{\prime },\) one has obtained the best approximation in \([1,\infty ]\) and \(\mu _{2k}=R_{k}^{*}\) holds (cf. Sect. 3.3).
In the case of a smaller \(R^{\prime }<R\) the same continuation method can be used. However, the new value \(R^{\prime }\) should be larger than \(\xi _{2k}\). Otherwise, the interval \([1,R^{\prime }]\) contains fewer than 2k values \(\xi _{i}\) of the vector \(\varvec{\xi }\) defining the initial value \(E_{k,[1,R]}^{*}=E_{k}(\cdot ,\varvec{\xi })\) and the Newton iteration may fail.
For the intermediate computations with \(R=R_{0}<R_{1}<\cdots <R_{M}=R^{\prime }\) one may save computational work by Remark 4.1.
For k large and R small, the points \(\xi _{i}\) are rather close. In this case, the restriction \(\xi _{2k}<R^{\prime }<R\) for the new value \(R^{\prime }\) is very restrictive. Here, another strategy can be applied. The affine map (footnote 6) \(x\mapsto 1+\frac{R^{\prime }-1}{R-1}\left( x-1\right) \) maps [1, R] onto \([1,R^{\prime }].\) Applying this map to \(\varvec{\xi },\) one obtains a rough approximation of \(\varvec{\xi }^{\prime }\) corresponding to \(E_{k,[1,R^{\prime }]}^{*}=E_{k}(\cdot ,\varvec{\xi }^{\prime }).\) Note that \(\xi _{2k}^{\prime }<R^{\prime }\) holds. After defining \(\varvec{\xi }^{\prime },\) the coefficients \(\mathbf {a}^{\prime }=\mathbf {a}(\varvec{\xi }^{\prime })\) have to be computed by the inner iteration of Sect. 4.4. If the factor \(\frac{R^{\prime }-1}{R-1}\) is chosen too small, this iteration may fail to converge.
If one follows these hints, the outer Newton iteration even works without damping.
We illustrate these hints by two examples. First we start from the best approximation \(E_{k,[1,R]}^{*}\) for \(k=7,\) \(R=1000.\) This case is less sensitive, i.e., the Newton iteration behaves rather robustly. For a larger \(R^{\prime }\in [R,10\cdot R]\) the outer Newton iteration converges. The choice of a smaller \(R^{\prime }\) is restricted by \(R^{\prime }>\xi _{14}=838.\) Much smaller \(R^{\prime }\) can be obtained by the second strategy using new \(\varvec{\xi }^{\prime }.\) Convergence is observed for factors \(\frac{R^{\prime }-1}{R-1}\in [0.5,2].\)
A more sensitive example is \(E_{k,[1,R]}^{*}\) for \(k=50,\) \(R=2_{10}8\) since \(\varepsilon _{[1,2_{10}8]}(50)=4.43_{10}{\text {-}}14\). An increase of R leads to convergence as long as \(R^{\prime }\le 3.5_{10}8=\frac{7}{4}R.\) A decrease of R is restricted by \(R^{\prime }>\xi _{100}=1.967_{10}8.\) The second strategy is more helpful. Using the factor \(\frac{R^{\prime }-1}{R-1}\in [0.92,1.1],\) the inner iterations converge and enable the computation for \(R^{\prime }\in [1.84_{10}8,\,2.2_{10}8]\).
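The affine-map strategy can be summarised by the following sketch (hypothetical helper map_zeros_to_new_interval, added for illustration; afterwards the coefficients belonging to the mapped points must be recomputed by the inner iteration of Sect. 4.4, starting from the old coefficients):

```python
import numpy as np

def map_zeros_to_new_interval(xi, R, R_new):
    """Map the interpolation points xi from [1, R] onto [1, R_new] by the affine
    map x -> 1 + (R_new - 1)/(R - 1) * (x - 1)."""
    xi = np.asarray(xi, dtype=float)
    return 1.0 + (R_new - 1.0) / (R - 1.0) * (xi - 1.0)
```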
4.6.4 Increasing k
The continuation principle applied to the k-dependence means that we increase k only to the next integer. The step \(k\mapsto k+1\) is more delicate since two additional parameters must be created. They must be such that the exponential sum is feasible and has two additional zeros. The strategy which is successfully applied for computing the data of the tables is explained using an example with rather large k. Note that the difficulty increases with the size of k.
We consider the largest k appearing in Table 8: \(k=56.\) Since \(\varepsilon _{[1,R]}(56)\) increases with increasing R, the largest possible R is the best candidate for starting values: \(R=R_{56}^{*}=7.5_{10}12.\) The coefficients of \(E_{56,[1,R]}^{*}\) are
| \(\nu \) | 1 | 2 | 3 | ... | 56 |
| --- | --- | --- | --- | --- | --- |
| \(a_{\nu }\) | 1.65\(_{10}\)\(-\)12 | 1.08\(_{10}\)\(-\)11 | 4.44\(_{10}\)\(-\)10 | ... | 5.7 |
| \(b_{\nu }\) | 5.16\(_{10}\)\(-\)13 | 5.68\(_{10}\)\(-\)12 | 3.01\(_{10}\)\(-\)11 | ... | 14.4 |
The ansatz for \(k=57\) is \(E_{57}(x):=a_{0}\exp (-b_{0}x)+E_{56,[1,R]}^{*}(x),\) i.e., we use the previous exponential sum supplemented by a further term. The index 0 emphasises that the new term is on the side of the very small coefficients. Afterwards the indices \(\{0,\dots ,56\}\) are replaced by \(\{1,\dots ,57\}\).
The additional term \(a_{0}\exp (-b_{0}x)\) should be so small that the equioscillation structure of \(E_{56,[1,R]}^{*}\) is not perturbed. Since \(\varepsilon _{[1,R]}(56)\approx 1_{10}{\text {-}}13\), we need \(a_{0}\le 1_{10}{\text {-}}13\). On the other hand, \(a_{0}\) should not be too small. The reason is the Newton process of Sect. 4.4 for computing \(\mathbf {a}(\varvec{\xi }).\) Here the derivative \(\frac{\mathrm {d}}{\mathrm {d}b_{0}}a_{0}\exp (-b_{0}x)=-a_{0}x\exp (-b_{0}x)\) should not be too close to zero.
A look at \(a_{\nu }\), \(b_{\nu }\) for \(\nu =1,2\) shows that they decay by a factor of about 10. This leads to the extrapolation \(a_{0}:=a_{1}/10\) and \(b_{0}:=b_{1}/10.\)
\(\frac{1}{\cdot }-E_{56,[1,R]}^{*}\) has the following zeros \(\xi _{i}\):
| i | 1 | 2 | 3 | ... | 109 | 110 | 111 | 112 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(\xi _{i}\) | 1.003 | 1.026 | 1.074 | ... | 1.59\(_{10}\)11 | 3.57\(_{10}\)11 | 9.27\(_{10}\)11 | 3.26\(_{10}\)12 |
One observes that the ratios \(\xi _{i+1}/\xi _{i}\) are increasing. Therefore we introduce two additional zeros \(\xi _{113}\) and \(\xi _{114}\) which continue this growth.
Since the new value of R must be larger than \(\xi _{114},\) we choose \(R=1_{10}15.\)
The first step consists of many (damped) Newton iterations for computing the coefficients \(\mathbf {a}(\varvec{\xi })\) and \(E_{57}\) for the new \(\varvec{\xi }\). It turns out that the last maximum is taken at \(\mu _{115}=1.9_{10}14.\) Accordingly we choose \(R=1.9_{10}14.\)
The first outer iteration requires a damping (footnote 7) of the Newton correction by 1/8. The values \(\xi _{i}\) as well as \(\mu _{115}\) have decreased and \(R=\mu _{115}=5.1_{10}13\) can be chosen. About 4 damped outer Newton steps are needed before the Newton method works without damping.
The choice of the initial values \(a_{0},\ b_{0},\)\(\xi _{2k-1},\xi _{2k}\) may be a matter of trial and error.
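The extension step can be sketched as follows (hypothetical helper extend_ansatz, added for illustration; the decay factor 10 and the continuation of the last ratio are rough placeholders in the spirit of the example above, and the result only serves as a starting value for the inner and damped outer iterations):

```python
import numpy as np

def extend_ansatz(a, b, xi, decay=10.0):
    """Sketch of the step k -> k+1 (Sect. 4.6.4): prepend one term on the side of
    the very small coefficients by extrapolating the observed decay of a_nu, b_nu,
    and append two further interpolation points continuing the growth of the
    ratios xi_{i+1}/xi_i (here simply by repeating the last ratio)."""
    a, b, xi = (np.asarray(v, dtype=float) for v in (a, b, xi))
    a_new = np.concatenate(([a[0] / decay], a))
    b_new = np.concatenate(([b[0] / decay], b))
    ratio = xi[-1] / xi[-2]
    xi_new = np.concatenate((xi, [xi[-1] * ratio, xi[-1] * ratio ** 2]))
    return a_new, b_new, xi_new
```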
4.7 Modifications
4.7.1 Wavelet applications
In [9] an application in quantum mechanics is mentioned which involves exponential sums for \(1/\sqrt{x}.\) The technique explained in [9] works equally well for the function 1 / x.
Whenever scalar products \(\left\langle \frac{1}{\cdot },\varphi \right\rangle \) with wavelet functions \(\varphi \) appear, we can exploit the following property. Depending on the vanishing moment M of \(\varphi \), \(\left\langle p,\varphi \right\rangle =0\) holds for all polynomials p of degree \(\le M-1.\) This leads to the following approximation problem. Let \(F_{k,M}^{*}\) be the best \(L^{\infty }\) approximation of \(\frac{1}{\cdot }\) in the interval [1, R] within the family of functions
\(F_{k,M}(x)=\sum _{\nu =1}^{k}a_{\nu }\exp (-b_{\nu }x)+p(x),\qquad p\ \text {polynomial of degree}\le M-1.\)
Obviously, the best approximation error improves with increasing M. As an example we show the errors for \(k=7,\)\(R=10\) (degree \(-1\) means \(p=0\)):
| Polynomial degree | \(-1\) | 0 | 1 |
| --- | --- | --- | --- |
| Approximation error | 2.344\(_{10}{\text {-}}8\) | 6.554\(_{10}{\text {-}}9\) | 1.934\(_{10}{\text {-}}9\) |
The best approximation \(F_{k,M}^{*}\) can be computed analogously to usual exponential sums. Note that the exponential part \(E_{k}\) in \(F_{k,M}^{*}=E_{k}+p\) is different from the best approximation \(E_{k}^{*}.\) After computing \(F_{k,M}^{*}=E_{k}+p,\) the polynomial part p can be omitted, since it is not needed for
\(\left\langle F_{k,M}^{*},\varphi \right\rangle =\left\langle E_{k},\varphi \right\rangle +\left\langle p,\varphi \right\rangle =\left\langle E_{k},\varphi \right\rangle .\)
Therefore the computational work for \(\left\langle E_{k},\varphi \right\rangle \) is independent of the degree of p.
4.7.2 Weighted norm
There may be reasons to prefer approximations with respect to a weighted norm
\(\left\| f\right\| _{\infty ,\omega ,[a,b]}:=\max _{x\in [a,b]}\omega (x)\left| f(x)\right| \)
with \(\omega >0.\) In principle, one can apply the Remez algorithm after replacing \(e_{k}(\mu _{i-1})+e_{k}(\mu _{i})=0\) in (4.4) by \(e_{k} (\mu _{i-1})\omega (\mu _{i-1})+e_{k}(\mu _{i})\omega (\mu _{i})=0\).
The computation of a best approximation \(E_{k,[a,b]}^{*}\) is simplified by the reduction to \(E_{k,[1,b/a]}^{*}\) (cf. Sect. 2.2). In the case of a weight \(\omega ,\) the reduction to [1, b / a] requires that \(\omega \) is homogeneous:
\(\omega (\lambda x)=\omega (\lambda )\,\omega (x)\qquad \text {for all }\lambda ,x>0.\)
Examples are all powers \(\omega (x)=x^{\gamma }\) (\(\gamma <1\) if \(b=\infty \)). Then the best approximation \(E_{k,[1,b/a]}^{*}\) in [1, b / a] also determines the best approximation \(E_{k,[a,b]}^{*}\) in [a, b] by
\(E_{k,[a,b]}^{*}(x)=\tfrac{1}{a}\,E_{k,[1,b/a]}^{*}(x/a)\)
with the error
\(\varepsilon _{\omega ,[a,b]}(k)=\tfrac{\omega (a)}{a}\,\varepsilon _{\omega ,[1,b/a]}(k).\)
4.7.3 Quadrature
Since \(\frac{1}{x}=\int _{0}^{\infty }\exp (-xt)\mathrm {d}t,\) any quadrature formula \(Q(f)=\sum _{\nu =1}^{k}a_{\nu }f(b_{\nu })\) applied to the function \(f(\cdot )=\exp (-x\,\cdot )\) yields an exponential sum of the form (1.1).
A particular choice is the sinc quadrature. It requires an integral over \(\mathbb {R}.\) For instance, the substitution \(t=\exp (y)\) yields an integral of the desired form: \(\frac{1}{x}=\int _{-\infty }^{\infty }\exp (-x\exp (y))\exp (y)\mathrm {d}y.\) The sinc quadrature is defined and analysed in [11, Sect. D.4]. The drawback is that the quadrature is not adapted to the fact that the interesting parameters x belong to the interval [1, R]. The \(L^{\infty }\) error estimate is of the form \(\le c\exp (-c^{\prime }\sqrt{k})\) (cf. [4, (3.6)]). Figure 3 shows the error \(e_{k}\) for \(k=45.\) The error bound is 2.63\(_{10}{\text {-}}7\), while the best approximation (say in \([1,10^{6}]\)) yields 3.913\(_{10}{\text {-}}15\). The quadrature-based error is oscillating, but remains in the positive part, i.e., the corresponding exponential sum is infeasible. Therefore it is not possible to use the quadrature result as starting value for the Remez algorithm.
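For comparison, the quadrature-based construction can be sketched as follows (hypothetical helpers, added for illustration; the step size h is a free parameter, typically coupled to n, and its choice is exactly the point where the construction is not adapted to a fixed interval [1, R]):

```python
import numpy as np

def sinc_quadrature_exponential_sum(n, h):
    """Truncated trapezoidal (sinc) rule for 1/x = int exp(-x*exp(y))*exp(y) dy:
    the quadrature points y_j = j*h, j = -n..n, yield an exponential sum with
    a_j = h*exp(j*h) and b_j = exp(j*h), i.e. k = 2n+1 terms."""
    j = np.arange(-n, n + 1)
    b = np.exp(j * h)
    return h * b, b                      # coefficients a_j, exponents b_j

def eval_exponential_sum(x, a, b):
    """E_k(x) = sum_nu a_nu * exp(-b_nu * x)."""
    return np.exp(-np.outer(np.asarray(x, dtype=float), b)) @ a
```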
Notes
1. In [12] the author writes: In general, the problem of finding a best uniform approximation ... is quite difficult, and consequently various schemes ... have sometimes been used to produce “good” if not best approximations.
2. Here we only consider real, in particular positive, coefficients.
3. For instance, the file k12.3E5 contains the coefficients corresponding to \(k=12\) and \(R=3_{10}5.\) The file with the largest value of R (k fixed) yields the best approximation for \([1,\infty ].\)
4. In the file, \(a_{\nu }\) and \(b_{\nu }\) are called omega[\(\nu \)] and alpha[\(\nu \)].
5. In the case of \(\varepsilon _{[1,R]}\approx 10^{-17}\) the extrema \(|e_{k}(\mu _{i})|\) start to be hard to equalise for small i where the cancellation effect is strongest. Nevertheless, the function \(E_{k}\) is still feasible. Then \(E_{k}\) is not the best approximation, but \(\Vert \frac{1}{\cdot }-E_{k}\Vert _{\infty }\le \max _{i}|e_{k}(\mu _{i})|\) holds.
6. An improvement could be a mapping making use of Remark 4.2.
7. The damping strategy uses the criteria of Sect. 4.6.1.
References
Ayala, P.Y., Scuseria, G.E.: Linear scaling second-order Moller–Plesset theory in the atomic orbital basis for large molecular systems. J. Chem. Phys. 110, 3660 (1999)
Barrar, R.B., Loeb, H.L.: On the Remez algorithm for non-linear families. Numer. Math. 15, 382–391 (1970)
Braess, D.: Nonlinear Approximation Theory. Springer, Berlin (1986)
Braess, D., Hackbusch, W.: Approximation of \(1/x\) by exponential sums in \([1,\infty )\). IMA J. Numer. Anal. 25, 685–697 (2005)
Braess, D., Hackbusch, W.: On the efficient computation of high-dimensional integrals and the approximation by exponential sums. In: DeVore, R.A., Kunoth, A. (eds.) Multiscale, Nonlinear and Adaptive Approximation, pp. 39–74. Springer, Berlin (2009)
de la Vallée Poussin, C.-J.: Leçons sur l’approximation des fonctions d’une variable réelle. Gauthier-Villars, Paris (1919)
Evans, J.W., Gragg, W.B., LeVeque, R.J.: On least squares exponential sum approximation with positive coefficients. Math. Comput. 34, 203–211 (1980)
Hackbusch, W.: Entwicklungen nach Exponentialsummen. Techn. Bericht 25, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig (2005)
Hackbusch, W.: Approximation of \(1/\left|x-y\right|\) by exponentials for wavelet applications. Computing 76, 359–366 (2006)
Hackbusch, W.: Tensor Spaces and Numerical Tensor Calculus, volume 42 of SSCM. Springer, Berlin (2012)
Hackbusch, W.: Hierarchical Matrices—Algorithms and Analysis, volume 49 of SSCM. Springer, Berlin (2015)
Kammler, D.W.: Chebyshev approximation of completely monotonic functions by sums of exponentials. SIAM J. Numer. Anal. 13, 761–774 (1976)
Kammler, D.W.: Least squares approximation of completely monotonic functions by sums of exponentials. SIAM J. Numer. Anal. 16, 801–818 (1979)
Remez, E.J.: Sur un procédé convergent d’approximations successives pour déterminer les polynômes d’approximation. Compt. Rend. Acad. Sci. 198, 2063–2065 (1934)
Süli, E., Mayers, D.F.: An Introduction to Numerical Analysis, 4th edn. Cambridge University Press, Cambridge (2008)
Takatsuka, A., Ten-no, S., Hackbusch, W.: Minimax approximation for the decomposition of energy denominators in Laplace-transformed Møller–Plesset perturbation theories. J. Chem. Phys. 129, 044112 (2008)
Varah, J.M.: On fitting exponentials by nonlinear least squares. SIAM J. Sci. Statist. Comput. 6, 30–44 (1985)
Werner, H.: Vorlesung über Approximationstheorie, volume 14 of Lect. Notes Math. Springer, Berlin (1966)
Acknowledgements
Open access funding provided by Max Planck Society.