Transseries: Composition, Recursion, and Convergence: G. A. Edgar November 1, 2018
Abstract
Additional remarks and questions for transseries. In particular: properties of composition for transseries; the recursive nature of the construction of R⟪x⟫; modes of convergence for transseries. There are, at this stage, questions and missing proofs in the development.
Contents
1 Introduction
2 Well-Based Transseries
3 The Recursive Structure of the Transline
4 Properties of Composition
5 Taylor's Theorem
9 Further Transseries
1 Introduction
Most of the calculations done with transseries are easy, once the basic framework is established. But that may not be the case for composition of transseries. Here I will discuss a few of the interesting features of composition.
The ordered differential field T = R⟪x⟫ = R⟪G⟫ of (real grid-based) transseries is completely explained in my recent expository introduction [8]. A related paper is [9]. Other sources for the definitions are: [1], [3], [7], [13], [15]. I will generally follow the notation from [8]. Van der Hoeven [13] sometimes calls T the transline.
So T is the set of all grid-based real formal linear combinations of monomials from G, while G is the set of all e^L for L ∈ T purely large. (Because of logarithms, there is no need to write separately two factors as x^b e^L.)
Notation 1.1. For transseries A, we already use exponents A^n for multiplicative powers, and parentheses A^{(n)} for derivatives. Therefore let us use square brackets A^{[n]} for compositional powers. In particular, we will write A^{[−1]} for the compositional inverse. Thus, for example, exp_n = exp^{[n]} = log^{[−n]}.
Write l_n for log_n if n > 0; write l_0 = x; write l_n = exp_{−n} if n < 0.
Recall [8, Prop. 3.24 & Prop. 3.29] two canonical decompositions for a transseries:
1.2. The canonical additive decomposition T = L + c + U, with L purely large, c ∈ R, and U ≺ 1.
1.3. The canonical multiplicative decomposition T = a e^L (1 + U), with a ∈ R, a ≠ 0, L purely large, and U ≺ 1.
For A ∈ T define
o(A) := { T ∈ T : T ≺ A },   O(A) := { T ∈ T : T ≼ A }.
These are used especially when A is a monomial, but o(A) = o(mag A). Conventionally, we write T = U + o(A) when we mean T ∈ U + o(A), or T − U ≺ A.
Notation 1.5. For use with a finite ratio set µ ⊂ G^small, we define
o_µ(A) := { T ∈ T : T ≺_µ A },   O_µ(A) := { T ∈ T : T ≼_µ A }.
This time monomials do not suffice: if µ = {x^{−1}, e^{−x}}, then o_µ(x^{−1} + e^{−x}) ≠ o_µ(x^{−1}).
Remark 1.6. Note the simple relationship between < and ≺: Define |T| = T if T ≥ 0, |T| = −T if T < 0. Then T ≺ A if and only if c|T| < |A| for all c ∈ R, c > 0.
2 Well-Based Transseries
Besides the grid-based transseries as found in [8], we may also refer to the well-based version as found, for example, in [7] or [15].
Definition 2.1. For an ordered abelian group M, let R[[M]] be the set of Hahn series with support which is well ordered (according to the reverse of ≻). Begin with group W_0 = { x^a : a ∈ R } and field T_0 = R[[W_0]]. Assuming field T_N = R[[W_N]] has been defined, let
W_{N+1} = { x^b e^L : b ∈ R, L ∈ T_N is purely large }.
Now as before, take unions W_• = ⋃_N W_N and T_• = ⋃_N T_N, then close under logarithms to obtain W_{•,•} and T_{•,•}.
A difference from the grid-based case: T_{•,•} ≠ R[[W_{•,•}]]. The domain of exp is T_{•,•} and not all of R[[W_{•,•}]].
Then T = T_{•,•} is what I will mean here by "well-based" transseries. This is the system found in [7], for example. This system and others are explored in [15].
We have used the Fraktur letter G (G) for "grid" and the Fraktur letter W (W) for "well". The notation T is used for both, which may perhaps be confusing. It is intended that what I say here can usually apply to either case.
Here is one of the results that the well-based theory depends on. (It is required, for example, to show that T^{−1} has well-ordered support.) I am putting it here because of its tricky proof. The result is attributed to Higman, with this proof due to Nash-Williams.
Proposition 2.2. Let M be a totally ordered abelian group. Let B ⊆ M^small be a set of small elements. Write B^* for the monoid generated by B. If B is well ordered (for the reverse of ≻), then B^* is also well ordered.
Proof. Write B_n for the set of all products of n elements of B. Thus: B_0 = {1}, B_1 = B, B^* = ⋃_{n=0}^∞ B_n. If g ∈ B^*, define the length of g as
l(g) = min { n : g ∈ B_n }.
Suppose (for purposes of contradiction) that there is an infinite strictly increasing
sequence in B∗ . Among all infinite strictly increasing sequences in B∗ , let l1 be the
minimum length of the first term. Choose n1 that has length l1 and is the first term of
an infinite strictly increasing sequence in B∗ . Recursively, suppose that finite sequence
n1 ≺ n2 ≺ · · · ≺ nk has been chosen so that it is the beginning of some infinite
strictly increasing sequence in B∗ . Among all infinite strictly increasing sequences in
B∗ beginning with n1 , · · · , nk , let lk+1 be the minimum length of the (k + 1)st term.
Choose nk+1 of length lk+1 such that there is an infinite strictly increasing sequence
in B∗ beginning n1 , · · · , nk , nk+1 . This completes a recursive definition of an infinite
strictly increasing sequence (nk ) in B∗ .
Now because all elements of B are small and this sequence is strictly increasing, n_k ≠ 1. For each k, choose a way to write n_k as a product of l_k elements of B, then let b_k ∈ B be least of the factors. So n_k = b_k m_k. Now (b_k) is an infinite sequence in B, so there is a subsequence (b_{k_j}) with b_{k_1} ≽ b_{k_2} ≽ ⋯ . So
m_{k_j} = n_{k_j}/b_{k_j} ≺ n_{k_{j+1}}/b_{k_j} ≼ n_{k_{j+1}}/b_{k_{j+1}} = m_{k_{j+1}},
so n_1, ⋯, n_{k_1 − 1}, m_{k_1}, m_{k_2}, ⋯ is an infinite strictly increasing sequence in B^* whose (k_1)st term m_{k_1} has length at most l_{k_1} − 1, contradicting the minimality of l_{k_1}.
Write W^pure_N for the purely exponential monomials at height N: for N ≥ 1, W^pure_N = { e^L : L purely large, supp L ⊆ W_{N−1} \ W_{N−2} }, with W^pure_0 = W_0 and W_{−1} = {1}.
Of course the sets W^pure_N are subgroups of W_•. Any g ∈ W_N can be written uniquely as g = ab with a ∈ W_{N−1} and b ∈ W^pure_N. Group W_N is the direct product of subgroups:
W_N = W^pure_0 · W^pure_1 ⋯ W^pure_{N−1} · W^pure_N.
A set A ⊂ W_N is decomposed as
A = { ab : b ∈ B, a ∈ A_b },   (∗)
where B ⊂ W^pure_N and, for each b ∈ B, A_b ⊂ W_{N−1}. The lexicographic ordering is the "height wins" rule:
a_1 b_1 ≺ a_2 b_2 ⟺ b_1 ≺ b_2 or (b_1 = b_2 and a_1 ≺ a_2).
So the set A is well ordered if and only if set B and all sets A_b are well ordered.
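As a small concrete instance of the "height wins" rule (my own illustration, not from the paper), restrict to monomials of the form x^a e^{bx} in W_1 and encode one as the pair (a, b); the pure factor of greater height is compared first:

```python
def precedes(m1, m2):
    # m = (a, b) stands for the monomial x^a e^{b x}; "height wins":
    # compare the pure factor e^{b x} first, then the lower-height factor x^a
    a1, b1 = m1
    a2, b2 = m2
    return b1 < b2 or (b1 == b2 and a1 < a2)
```

So, for instance, x^100 ≺ e^x, since every power of x is far smaller than e^x.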
Decomposition of Sets
I include here a few more uses of the decomposition (∗). Skip to Section 3 if you are
primarily interested in the grid-based version of the theory.
Write m† = m′/m for the logarithmic derivative. In particular, if m = e^L ∈ W^pure_N, N ≥ 2, then m† = L′ is supported in W^large_{N−1} \ W_{N−2}, and if m = e^L ∈ W^pure_1, then m† = L′ is supported in W_0.
The existence of the derivative for transseries is stated like this: If T = Σ_{g∈A} c_g g, then T′ = Σ_{g∈A} c_g g′. Let us consider it more carefully.
Theorem 2.5. Let A ⊆ W_{N,M} be well ordered, and let T = Σ_{g∈A} c_g g in T_{N,M} have support A. Then (i) the family { supp(g′) : g ∈ A } is point-finite; (ii) ⋃_{g∈A} supp(g′) is well ordered; (iii) Σ_{g∈A} c_g g′ exists in T_{•,•}.
This is proved in stages.
Proposition 2.6. Let A ⊆ W_0 be well ordered, and let T = Σ_{g∈A} c_g g have support A. Then (i) the family { supp(g′) : g ∈ A } is point-finite; (ii) ⋃_{g∈A} supp(g′) is well ordered; (iii) Σ_{g∈A} c_g g′ exists in T_0.
For the inductive step, decompose A as in (∗):
A = { ab : b ∈ B, a ∈ A_b },
where B ⊂ W^pure_N is well ordered, and for each b ∈ B, the set A_b ⊂ W_{N−1} is well ordered. Now if g = ab ∈ A, b ∈ W^pure_N, a ∈ W_{N−1}, then g′ = (a′ + ab†)b and supp(a′ + ab†) ⊂ W_{N−1}.
(i) Let m ∈ W belong to some supp(g′ ). It could be that m ∈ supp(a′ )b, b ∈ B, a ∈
Ab; this happens for only one b and only finitely many a by the induction hypothesis.
Or it could be that m ∈ supp(ab† )b. This happens for only one b and (since both Ab
and supp b† are well ordered) only finitely many a. So, in all, m ∈ supp(g′ ) for only
finitely many g ∈ A.
(ii) For b ∈ B, let
C_b = A_b · supp b† ∪ ⋃_{a∈A_b} supp(a′).
So using the induction hypothesis and [8, Prop. 3.27], we conclude that C_b ⊂ W_{N−1} is well ordered. Therefore
⋃_{g∈A} supp(g′) ⊆ { ab : b ∈ B, a ∈ C_b }
is also well ordered since it is ordered lexicographically.
(iii) follows from (i) and (ii).
Proof of Theorem 2.5. Recall the notation l_m = log ∘ log ∘ ⋯ ∘ log with m logarithms (m > 0), l_0 = x, l_{−m} = exp ∘ exp ∘ ⋯ ∘ exp with m exponentials. Note for m ≥ 1,
l′_m = 1/(x l_1 l_2 ⋯ l_{m−1}) ∈ W_{m−1,m−1}.
Define A_1 = { g ∘ l_{−M} : g ∈ A }. Then A_1 is well ordered and A_1 ⊆ W_N. Thus the previous result applies to A_1. Now for g ∈ A we have g = g_1 ∘ l_M, g_1 ∈ A_1, and g′ = (g′_1 ∘ l_M) · l′_M. So supp(g′) = (supp(g′_1) ∘ l_M) · l′_M. Both correspondences (compose with l_M and multiply by l′_M) are bijective and order-preserving. So the family { supp(g′) : g ∈ A } is point-finite since { supp(g′_1) : g_1 ∈ A_1 } is point-finite; ⋃_{g∈A} supp(g′) is well ordered since ⋃_{g_1∈A_1} supp(g′_1) is well ordered. And supp(g′) ⊂ W_{max(N,M),M}, so T′ ∈ T_{•,•}.
Now we consider a set closed under derivative in a certain sense: a single well
ordered set that supports all derivatives of some T .
Here the relevant decomposition is A = { a e^L : e^L ∈ B, a ∈ A_L }, where B is well ordered and, for each e^L ∈ B, the set A_L ⊆ W_0 is well ordered; the ordering is lexicographic.
Proof. Write e = e_0 e_1 with e_0 ∈ W_{N−1}, e_1 ∈ W^pure_N, e_1 ≺ 1. Now for g = ab ∈ A, we have
xeg′ = x e_0 e_1 (a′b + ab′) = (x e_0 a′ + x e_0 ab†) · (e_1 b),   (1)
with e_1 b ∈ W^pure_N and the support of the first factor in W_{N−1}. Applying this again:
(xe∂)² g = ( x e_0 (x e_0 a′ + x e_0 ab†)′ + x e_0 (x e_0 a′ + x e_0 ab†)(b† + e_1†) ) · e_1² b.
Continue many times: (xe∂)^j g = V · e_1^j b, supp V ⊂ W_{N−1}, and every term in V has the following form: some a ∈ A_b, or some derivative of it up to order j, multiplied by factors chosen from x, e_1, b†, e_1†, or derivatives of these up to order j, each to a power at most j. So there are finitely many well ordered sets involved.
Now let B̃ = B · {1, e_1, e_1², ⋯}. Thus B ⊆ B̃ ⊂ W^pure_N and B̃ is well ordered. Fix m ∈ B̃. Because B is well ordered and e_1 ≺ 1, we have m = b e_1^j with b ∈ B for only finitely many different values of j. For each such j we get a well ordered set in W_{N−1}. Since there are finitely many j, in all we get a well ordered set, call it Ã_m. Our final result is
Ã = { am : m ∈ B̃, a ∈ Ã_m }.
Proof. Let n be minimum such that A ⊂ Wn . If n = N , then this has been proved
in Proposition 2.10. In fact, if n < N the proof in Proposition 2.10 still works with
B = {1}. We proceed by induction on n. Assume n > N and the result is true for
smaller n. Decompose A as usual:
A = { ab : b ∈ B, a ∈ Ab } ,
where B ⊂ Wpure n is well ordered, and for each b ∈ B, the set Ab ⊂ Wn−1 is well
ordered. For g = ab ∈ A,
xeg′ = (xea′ + xeab† )b. (2)
Now supp(xeb†) is well ordered and ≼ 1, so the monoid supp(xeb†)^* generated by it is well ordered, so A_b · supp(xeb†)^* is well ordered. By the induction hypothesis, there exists well ordered Ã_b such that
A_b · supp(xeb†)^* ⊆ Ã_b ⊂ W_{n−1},
and if m ∈ Ã_b then supp(xem′) ⊆ Ã_b. Then define
Ã = { ab : b ∈ B, a ∈ Ã_b },
which is again well ordered. From (2) we conclude: if g ∈ Ã, then supp(xeg′) ⊆ Ã.
3 The Recursive Structure of the Transline
Proposition 3.1 (Inductive Principle). Let R ⊆ T. Assume:
(a) a ∈ R for all constants a ∈ R.
(b) x ∈ R.
(c) If A, B ∈ R, then AB ∈ R.
(d) If A_i ∈ R for all i in some index set, and A_i → 0, then Σ A_i ∈ R.
(e) If A ∈ R, then e^A ∈ R.
(f) If A ∈ R, then A ∘ log ∈ R.
Then R = T.
Proof. This principle is clear from the definition for T in [8] once we observe: (i) x ∘ log = log(x), so log(x) ∈ R by (b) and (f). (ii) If b ∈ R, then b log(x) ∈ R by (a) and (c). (iii) e^{b log(x)} = x^b, so x^b ∈ R by (e). (iv) Once the terms of a purely large L are known to be in R, we get the monomial x^b e^L ∈ R. (v) If T = Σ c_j g_j and the monomials g_j ∈ R, then T ∈ R. (vi) If T ∈ R, then T ∘ log_M ∈ R.
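As a toy model (my own illustration, not from [8]), the closure operations of Proposition 3.1 can be read as constructors of a small expression type, and the inductive principle as structural recursion over it. Here the recursion computes a naive syntactic count of exp-nestings and log-compositions; infinite sums (d) are omitted since only finitely many constructors can appear in a finite expression:

```python
from dataclasses import dataclass

@dataclass
class Const:          # condition (a)
    value: float

@dataclass
class X:              # condition (b)
    pass

@dataclass
class Prod:           # condition (c)
    left: object
    right: object

@dataclass
class Exp:            # condition (e)
    arg: object

@dataclass
class CompLog:        # condition (f): A ∘ log
    arg: object

def height_depth(e):
    """Structural recursion: (naive exp-height, naive log-depth)."""
    if isinstance(e, (Const, X)):
        return (0, 0)
    if isinstance(e, Prod):
        h1, d1 = height_depth(e.left)
        h2, d2 = height_depth(e.right)
        return (max(h1, h2), max(d1, d2))
    if isinstance(e, Exp):
        h, d = height_depth(e.arg)
        return (h + 1, d)
    h, d = height_depth(e.arg)  # CompLog
    return (h, d + 1)
```

These syntactic counts are only upper bounds for the semantic height and depth of [8] (e.g. exp(log(x)) simplifies to x), which is exactly why the inductive principle argues about the set R rather than individual expressions.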
In fact, the set of conditions can be reduced:
Corollary 3.2. Let R ⊆ T = R⟪G⟫, and identify G as a subset of T as usual. Assume:
(d′) If supp A ⊆ R, then A ∈ R.
(e′) If b ∈ R and L ∈ R is purely large and log-free, then x^b e^L ∈ R.
(f′) If g ∈ R is a monomial, then g ∘ log ∈ R.
Then R = T.
Proof. Since supp 0 = ∅, we get 0 ∈ R by (d′ ); but 0 is purely large and log-free, so
1, x ∈ R by (e′ ). Follow the construction in [8].
Another inductive form (see [13]):
Corollary 3.3. Let R ⊆ T = R⟪G⟫, and identify G as a subset of T as usual. Assume:
(b′′) For all n ∈ N, l_n ∈ R.
(d′′) If supp A ⊆ R, then A ∈ R.
(e′′) If L ∈ R is purely large, then e^L ∈ R.
Then R = T.
Proof. First, log x ∈ R by (b′′). For any b ∈ R, b log x is purely large, so e^{b log x} = x^b ∈ R by (e′′). Next, T_0 ⊆ R and b log x + L ∈ R for any purely large L ∈ T_0 by (d′′), so e^{b log x + L} = x^b e^L ∈ R. Thus G_1 ⊆ R, so T_1 ⊆ R. Continuing inductively, G_n, T_n ⊆ R for all n ∈ N. So T_• ⊆ R.
Note that R̃ := { T ∈ T : T ∘ log ∈ R } also satisfies the three conditions, so by the preceding paragraph T_• ⊆ R̃, and T_{•1} ⊆ R. Continuing inductively, T_{•m} ⊆ R for all m ∈ N. So T_{••} ⊆ R and R = T.
Question 3.4. Is there a good recursive formulation for P or S? See 4.1.
The Schmeling Tree of a Transmonomial
Let g be a transmonomial, g ∈ G. Then g = e^L, where L ∈ T is purely large. So L = c_0 g_0 + c_1 g_1 + ⋯ where c_i ∈ R and g_i ∈ G^large. We may index this as L = Σ_i c_i g_i, where i runs over some ordinal (an ordinal < ω^ω for the grid-based case; just countable for the well-based case; possibly finite; possibly just a single term; or even no terms at all if g = 1).
In turn, each g_i = e^{L_i}, where L_i ∈ T is purely large and positive. So L_i = Σ_j c_{ij} g_{ij}, where index j runs over some ordinal (possibly a different ordinal for different i). Continuing, each g_{ij} = e^{L_{ij}}, where L_{ij} ∈ T, and L_{ij} = Σ_k c_{ijk} g_{ijk} where g_{ijk} ∈ G. And so on: each g_{i_1 i_2 ⋯ i_s} is in G^large, and has the form g_{i_1 i_2 ⋯ i_s} = e^{L_{i_1 i_2 ⋯ i_s}}, and L_{i_1 i_2 ⋯ i_s} = Σ_j c_{i_1 i_2 ⋯ i_s j} g_{i_1 i_2 ⋯ i_s j}.
Say the original monomial g has height N; that is, in the terminology of [8], g ∈ G_{N,•}. Then eventually (with s ≤ N) we reach g_{i_1 i_2 ⋯ i_s} = (l_m)^b for some m, and if b ≠ 1, then in one more step we get g_{i_1 i_2 ⋯ i_{s+1}} = l_{m+1}. Let us stop a "branch" i_1, i_2, ⋯ when we reach some l_m (even if m ≤ 0, so that we have x or exp_n x).
The structure of the monomial g then corresponds to a Schmeling tree. (We have adapted this tree description from Schmeling's thesis [17].) Each node corresponds to some monomial. The root corresponds to g. The children of g are the g_i. A leaf corresponds to some log_m x, and is labeled by the integer m. Each node that is not a leaf has countably many children, arranged in an ordinal, and each edge is labeled by a real number. All nodes g_{i_1 i_2 ⋯ i_s} in the tree (except possibly the root g) are large monomials.
Example 3.5. Consider the following example. The ordinals here are all finite, so that everything can be written down.
g = exp( − exp( 4 exp(2x⁴ − x) − (2/3) exp x ) + 3 exp( π exp(x⁴ − 2x²) + log x ) ).
Figure 1: The Schmeling tree corresponding to monomial g
Say that g has tree-height N iff the longest branch (from root to leaf) has N edges, and that g has tree-depth M iff M is the largest label on a leaf. So the example in Figure 1 has tree-height 4 and tree-depth 1. These definitions are convenient for analysis of such a tree diagram. They may differ from the notions of "height" and "depth" defined in [8]. If g has height N (that is, g ∈ G_{N•}), then g has tree-height at most N + 1. But it may be much smaller; for example,
g = e^{e^{e^x} + x}
has tree-height 1 but height 3. If g has depth M (that is, g ∈ G_{•M}), then g has tree-depth M or M + 1, at least if we have allowed negative values of M. The same example g has depth 0 and tree-depth 0, but
g = e^{e^{e^{x²}} + x}
has depth 0 and tree-depth 1.
Each monomial of the derivative g′ has the form
g g_{i_1} g_{i_1 i_2} ⋯ g_{i_1 i_2 ⋯ i_{s−1}} · (log_m x)′,   (1)
where s is chosen so that g_{i_1 i_2 ⋯ i_s} = log_m x, and of course (log_m x)′ is itself a monomial. (The monomials g_{i_1}, ⋯, g_{i_1 i_2 ⋯ i_{s−1}} are large, but if m > 0, then the monomial (log_m x)′
is small.) So there is one term of g′ for each branch (from root to leaf) of the tree. In
the derivative g′ , the coefficient for monomial (1) is
ci1 ci1 i2 · · · ci1 i2 ···is ,
the product of all the edge-labels on the corresponding branch.
Example 3.6. Following the tree in the example (Figure 1), we may write the derivative g′ with one term for each of the six branches of the tree:
g′ = (−1) · 4 · 2 · 4 · g g_0 g_00 g_000 · (log x)′
   + (−1) · 4 · (−1) · g g_0 g_00 · x′
   + (−1) · (−2/3) · g g_0 · (exp x)′
   + 3 · π · 1 · 4 · g g_1 g_10 g_100 · (log x)′
   + 3 · π · (−2) · 2 · g g_1 g_10 g_101 · (log x)′
   + 3 · 1 · g g_1 · (log x)′.
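The branch products in Example 3.6 can be checked mechanically. Below is a minimal sketch (my own encoding, not from the paper): a node is a list of (edge coefficient, subtree) pairs, with None marking a leaf, and a depth-first walk multiplies the edge labels along each branch:

```python
import math

leaf = None
tree = [                          # the tree of Figure 1
    (-1, [                        # g_0
        (4, [                     # g_00
            (2, [(4, leaf)]),     # g_000 = exp(4 log x); leaf log x
            (-1, leaf),           # leaf x
        ]),
        (-2/3, leaf),             # leaf exp x
    ]),
    (3, [                         # g_1
        (math.pi, [               # g_10
            (1, [(4, leaf)]),     # g_100; leaf log x
            (-2, [(2, leaf)]),    # g_101; leaf log x
        ]),
        (1, leaf),                # leaf log x
    ]),
]

def branch_products(children, acc=1.0):
    # multiply the edge labels from the root down to each leaf
    out = []
    for coeff, sub in children:
        if sub is None:
            out.append(acc * coeff)
        else:
            out.extend(branch_products(sub, acc * coeff))
    return out

products = branch_products(tree)  # one coefficient per term of g'
```

The six products −32, 4, 2/3, 12π, −12π, 3 are exactly the coefficients appearing in the six terms of g′ above.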
The monomial (1) without the first factor g is an element of the set lsupp(g). The magnitude of g′ is the monomial we get following the left-most branch,
g g_0 g_00 ⋯ g_{00⋯0} (log_m x)′,
since all other branches are far smaller.
In the special case where the tree-depth of g is ≤ 0, and we extend all branches so that all leaves are x, the monomials in g′ are
g g_{i_1} g_{i_1 i_2} ⋯ g_{i_1 i_2 ⋯ i_{s−1}},   (0)
where s is chosen with g_{i_1 i_2 ⋯ i_s} = x. In this case, all monomials g_{i_1} ⋯ g_{i_1 i_2 ⋯ i_{s−1}} in lsupp g are large, and we have
m := max lsupp g = g_0 g_00 ⋯ g_{00⋯0} = mag(g′/g).
Then g′ ∼ gm, and we get g^{(n)} ∼ g m^n for all n ∈ N by induction using m² ≻ m′ [8, Prop. 3.82(iv)]. (This may not hold when g has positive tree-depth.)
Proposition 3.7. Let T, V ∈ T. Assume all monomials in T have tree-depth ≤ 0, and V ≺ 1/m where m = max lsupp T. Then the family
T^{(n)} V^n,   n ∈ N,
is point-finite, so the series
Σ_{n=0}^∞ T^{(n)}(x) V^n / n!
converges in the asymptotic topology.
Proof. Fix a finite set µ ⊂ G^small so that all far-smaller inequalities are witnessed by µ: in particular, V ≺_µ 1/m and T = dom(T) · (1 + S) with S ≺_µ 1. Note that T^{(n+1)} ∼ m T^{(n)}. Then
T ≻_µ T′V ≻_µ T″V² ≻_µ ⋯,
so by [8, Prop. 4.17] the series Σ T^{(n)}(x) V^n / n! is point-finite.
Remark 3.8. The same result should be true for other T , perhaps using tsupp not
lsupp; see [9, Def. 7.1].
4 Properties of Composition
Composition T ◦ S is defined when T, S ∈ T and S is large and positive. As usual we
will write T = T (x) and T ◦ S = T (S).
Notation 4.1. Write P for the group of large positive transseries. And S for the subgroup
S = x + o(x) = { T ∈ T : dom T = x } = { T ∈ T : T ∼ x }. For now, think of P and
S as sets. They are closed under composition. For existence of inverses: well-based,
Proposition 4.20; grid-based, [9, Sec. 8].
Many basic properties of composition may be proved by applying an inductive prin-
ciple such as Proposition 3.1 to the left composand T . (I may—perhaps misleadingly—
call this “induction on the height”.) Here are some examples.
Proposition 4.2. Let T, T1 , T2 ∈ T, S ∈ P. Then
T > 0 =⇒ T ◦ S > 0,
T = 0 =⇒ T ◦ S = 0,
T < 0 =⇒ T ◦ S < 0,
T1 < T2 =⇒ T1 ◦ S < T2 ◦ S,
T1 = T2 =⇒ T1 ◦ S = T2 ◦ S,
T1 > T2 =⇒ T1 ◦ S > T2 ◦ S,
T ≺ 1 =⇒ T ◦ S ≺ 1,
T ≻ 1 =⇒ T ◦ S ≻ 1,
T ≍ 1 =⇒ T ◦ S ≍ 1,
T ∼ 1 =⇒ T ◦ S ∼ 1,
T1 ≺ T2 =⇒ T1 ◦ S ≺ T2 ◦ S,
T1 ≻ T2 =⇒ T1 ◦ S ≻ T2 ◦ S,
T1 ≍ T2 =⇒ T1 ◦ S ≍ T2 ◦ S,
T1 ∼ T2 =⇒ T1 ◦ S ∼ T2 ◦ S,
T ◦ S ≍ mag(T ◦ S) = mag((mag T ) ◦ S) ≍ (mag T ) ◦ S,
T ◦ S ∼ dom(T ◦ S) = dom((dom T ) ◦ S) ∼ (dom T ) ◦ S.
Some corresponding things may fail for the other composand: Let T = e^x, S_1 = x + log x, and S_2 = x. Then S_1 ≍ S_2 but T ∘ S_1 ≻ T ∘ S_2; and dom(T ∘ S_1) is not ≍ T ∘ dom S_1.
Proposition 4.3. Let S_1, S_2 ∈ P, S_1 < S_2. Then:
(a) if c ∈ R, c > 0, then S_1^c < S_2^c;
(b) if c ∈ R, c < 0, then S_1^c > S_2^c;
(c) log(S_1) < log(S_2);
(d) exp(S_1) < exp(S_2).
Proof. (a) Write the canonical multiplicative decomposition S_1 = a_1 e^{L_1} (1 + U_1) as in 1.3, and similarly S_2 = a_2 e^{L_2} (1 + U_2). Then
S_1^c = a_1^c e^{cL_1} (1 + cU_1 + Σ_{j=2}^∞ c_j U_1^j),   S_2^c = a_2^c e^{cL_2} (1 + cU_2 + Σ_{j=2}^∞ c_j U_2^j),   (1)
for certain (binomial) coefficients c_j. Now for S_1 < S_2 there are these cases: (i) L_1 < L_2; (ii) L_1 = L_2, a_1 < a_2; (iii) L_1 = L_2, a_1 = a_2, U_1 < U_2. But in each of these cases, applying equations (1) shows S_1^c < S_2^c. For case (iii):
S_2^c − S_1^c = a_1^c e^{cL_1} (U_2 − U_1) ( c + Σ_{j=2}^∞ c_j (U_2^{j−1} + U_2^{j−2}U_1 + ⋯ + U_1^{j−1}) ) > 0,
since the terms in the Σ are all ≺ 1.
(b) is similar.
(c) Write the canonical multiplicative decomposition S_1 = a_1 e^{L_1} (1 + U_1) as in 1.3 and similarly S_2 = a_2 e^{L_2} (1 + U_2). Then
log(S_1) = log(a_1) + L_1 + U_1 + Σ_{j=2}^∞ c_j U_1^j,
log(S_2) = log(a_2) + L_2 + U_2 + Σ_{j=2}^∞ c_j U_2^j,
for certain coefficients c_j. The same cases (i)–(iii) may be used, and in each case we get log(S_1) < log(S_2). Case (iii) has reasoning as we did before for (a).
(d) For this, write the canonical additive decomposition S_1 = L_1 + c_1 + U_1 as in 1.2, and similarly S_2 = L_2 + c_2 + U_2. Then
e^{S_1} = e^{c_1} e^{L_1} (1 + U_1 + Σ_{j=2}^∞ c_j U_1^j),   e^{S_2} = e^{c_2} e^{L_2} (1 + U_2 + Σ_{j=2}^∞ c_j U_2^j),
for certain coefficients c_j. For S_1 < S_2 there are three cases: (i) L_1 < L_2; (ii) L_1 = L_2, c_1 < c_2; (iii) L_1 = L_2, c_1 = c_2, U_1 < U_2. In all three cases we get e^{S_1} < e^{S_2}.
Proof. First note: If L is purely large and positive, then e^L ≻ L. First use [8, Prop. 3.72] for log-free L. Then use Proposition 4.2 to compose with log_M on the inside. It follows that: If T ≻ 1 and T > 0, then e^T ≻ T.
(a) Write A = log(T) − T + 1; I must show A < 0. Write the canonical multiplicative decomposition T = a e^L (1 + U) as in 1.3. Then log(T) = log(a) + L − Σ_{j=1}^∞ (−1)^j U^j / j.
So assume L = 0. If c ≠ 0, then A ∼ e^c − c − 1, which is > 0 by the ordinary real Taylor theorem. So assume c = 0. Then if V ≠ 0 we have
A = Σ_{j=0}^∞ V^j / j! − V − 1 = V²/2 + o(V²) > 0.
Exponentiality
Associated to each large positive transseries is an integer known as its "exponentiality" [13, Exercise 4.10]. If you compose with log sufficiently many times on the left, the magnitude is a leaf l_m. The number p in the following result is the exponentiality of Q, written p = expo Q.
Proof. We will use the basic definition for logarithms. Let A = c e^L (1 + U) be the canonical multiplicative decomposition. If A ∈ P, this means c > 0 and L is purely large and positive. Then log A = L + log c + Σ_{j=1}^∞ ((−1)^{j+1}/j) U^j. From this we get: If A, B ∈ P,
so expo T = 1.
Proposition 4.8. If expo T = 0, then log_k ∘ T ∘ exp_k is log-free for k large enough.
Simpler Proof Needed
Here is a simple fact. It needs a simple proof. It is true for functions, so it is surely
true for transseries as well. My overly-involved proof will be given in Section 8. In
fact, there are two propositions; each can be deduced from the other.
Proposition 4.9. Let T ∈ T and S_1, S_2 ∈ P with S_1 < S_2. Then
T′ > 0 ⟹ T ∘ S_1 < T ∘ S_2,
T′ = 0 ⟹ T ∘ S_1 = T ∘ S_2,   (1)
T′ < 0 ⟹ T ∘ S_1 > T ∘ S_2.
Proposition 4.10. Let A, B ∈ T with A′ ≺ B′, and let S_1, S_2 ∈ P with S_1 < S_2. Then
A ∘ S_2 − A ∘ S_1 ≺ B ∘ S_2 − B ∘ S_1.   (2)
Proof of 4.10 from 4.9. Since the theorem is unchanged when we replace B by −B, we may assume B′ > 0. We have A′ ≺ B′. Let c ∈ R. By Remark 1.6, B′ > cA′, so (B − cA)′ > 0. Therefore, by Proposition 4.9, (B − cA) ∘ S_1 < (B − cA) ∘ S_2, so
B ∘ S_2 − B ∘ S_1 > c (A ∘ S_2 − A ∘ S_1).
Since this holds for all c ∈ R, Remark 1.6 yields (2).
Proof of 4.9 from 4.10. Let R be the set of all T ∈ T that satisfy (1) for all S_1, S_2 ∈ P with S_1 < S_2. We claim R satisfies the conditions of Corollary 3.3. Clearly 1, x ∈ R.
(b′′) Note l′_m = 1/∏_{j=0}^{m−1} l_j > 0. If S_1 < S_2, then by Proposition 4.3(c) we have log_m S_1 < log_m S_2.
(d′′) Assume supp T ⊆ R. If T = 0, the conclusion is clear. Assume T ≠ 0. Let ag = dom T, a ∈ R, g ∈ G. We may assume g ≠ 1, since if g = 1, we may consider T − ag instead. So T′ ∼ ag′. Write A = T − ag so that T = ag + A with A ≺ ag. There will be cases based on the signs of a and g′. Take the case a > 0, g′ > 0. So g ∘ S_1 < g ∘ S_2 since g ∈ R. Now by Proposition 4.10,
ag ∘ S_2 − ag ∘ S_1 ≻ A ∘ S_2 − A ∘ S_1,
so T ∘ S_2 − T ∘ S_1 > 0, that is, T ∘ S_1 < T ∘ S_2. The other cases are similar.
Remark 4.11. To prove either 4.10 or 4.9 outright seems to require more work than
the proofs found above. See Theorem 8.14.
Here is a special case of Proposition 4.10.
Grid-Based Version
As we know, T ≺ S if and only if T ≺_µ S for some finite set µ ⊂ G^small of generators. So of course Proposition 4.12 needs a form in terms of ratio sets. It is found in [9, Rem. 9.3]:
Proposition 4.13. Let µ be a ratio set. Let S_1, S_2 ∈ P. Then there is a ratio set α such that: For every A ∈ T^µ, if A ≺_µ x, then A(S_2) − A(S_1) ≺_α S_2 − S_1.
Note that α depends on S_1 and S_2, not just on a ratio set generating them. It is apparently not possible to avoid this problem:
Question 4.14. Given a ratio set µ ⊂ G^small, is there α ⊇ µ such that: if A, S_1, S_2 ∈ T^µ, A ≺_µ x, S_1, S_2 ∈ P, and S_1 < S_2, then A ∘ S_2 − A ∘ S_1 ≺_α S_2 − S_1?
Example 4.15. Let µ = {x^{−1}, e^{−x³}}. Consider A = µ_2 = e^{−x³} and S_a = µ_1^{−1} + aµ_1 = x + ax^{−1} for a ∈ R. Certainly A ≺_µ 1. Compute
A ∘ S_a = e^{−(x + ax^{−1})³} = e^{−x³ − 3ax − 3a²x^{−1} − a³x^{−3}}.
The dominant term is the monomial e^{−x³−3ax}. As a ranges over R, these monomials do not lie in any grid. Nor even in any well ordered set.
Now if a < b, then S_a < S_b and e^{−x³−3ax} ≻ e^{−x³−3bx}, so
S_b − S_a = (b − a)x^{−1},   A ∘ S_b − A ∘ S_a ∼ −e^{−x³−3ax}.
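The exponent here is an ordinary Laurent-polynomial computation: (x + ax^{−1})³ = x³ + 3ax + 3a²x^{−1} + a³x^{−3}. A quick sketch checking it (my own illustration), with Laurent polynomials stored as dicts from exponent to coefficient:

```python
def lmul(p, q):
    # multiply Laurent polynomials in x, stored as {exponent: coefficient}
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0.0) + a * b
    return r

a = 2.0                      # any fixed real parameter
s = {1: 1.0, -1: a}          # S_a = x + a x^{-1}
cube = lmul(s, lmul(s, s))   # (x + a x^{-1})^3
```

With a = 2 this gives {3: 1, 1: 6, −1: 12, −3: 8}, i.e. x³ + 3ax + 3a²x^{−1} + a³x^{−3}; the two large terms x³ + 3ax govern the dominant monomial of A ∘ S_a.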
Integral Notation
Notation 4.16. If A, B ∈ T and A′ = B, we may sometimes write A = ∫B, but in fact A is only determined by B up to a constant summand. The large part of A is determined by B. We also write ∫_{S_1}^{S_2} B := A(S_2) − A(S_1), which is uniquely determined by B, and is defined for S_1, S_2 ∈ P, S_1 < S_2.
Of course, with this definition, any statement about integrals is equivalent to a
statement about derivatives. Propositions 4.9 or 4.10 lead to the following.
Corollary 4.17. Let A, B ∈ T, S_1, S_2 ∈ P, S_1 < S_2. Then
B > 0 ⟹ ∫_{S_1}^{S_2} B > 0,
B = 0 ⟹ ∫_{S_1}^{S_2} B = 0,
B < 0 ⟹ ∫_{S_1}^{S_2} B < 0,
A > B ⟹ ∫_{S_1}^{S_2} A > ∫_{S_1}^{S_2} B,
A = B ⟹ ∫_{S_1}^{S_2} A = ∫_{S_1}^{S_2} B,
A < B ⟹ ∫_{S_1}^{S_2} A < ∫_{S_1}^{S_2} B.
Remark 1.6 lets us prove formulas about ≺ from formulas about <. Here are some
examples.
Proposition 4.18. If A, B ∈ T, A, B nonzero, S_1, S_2 ∈ P, S_1 < S_2, then
A ≻ B ⟹ ∫_{S_1}^{S_2} A ≻ ∫_{S_1}^{S_2} B,
A ≺ B ⟹ ∫_{S_1}^{S_2} A ≺ ∫_{S_1}^{S_2} B,
A ≍ B ⟹ ∫_{S_1}^{S_2} A ≍ ∫_{S_1}^{S_2} B,
A ∼ B ⟹ ∫_{S_1}^{S_2} A ∼ ∫_{S_1}^{S_2} B.
Compositional Inverse
Now using Proposition 4.12 we get a nice proof for the existence of inverses under
composition. (For the well-based case.) See also [7, Cor. 6.25].
Proposition 4.19. Let T = x + A, A ≺ x, supp A ⊂ G_N. Then T has an inverse S under composition, S = x + B, B ≺ x, supp B ⊂ G_N.
Proof. Let the function Φ be defined by Φ(S) = x − A ∘ S. Then Φ maps A := { x + B : B ≺ x, supp B ⊆ G_N } into itself [8, Prop. 3.98]. I claim Φ is contracting on A. Indeed, if S_1, S_2 ∈ A and S_1 ≠ S_2, then
Φ(S_2) − Φ(S_1) = A ∘ S_1 − A ∘ S_2 ≺ S_2 − S_1
by Proposition 4.12.
Apply the fixed-point theorem [12, Thm. 4.7] (see Proposition 6.4, below) to get S with S = Φ(S). Then
T ∘ S = S + A ∘ S = Φ(S) + A ∘ S = x.
As is well-known: if right inverses all exist, then they are full inverses. Review of
the proof: Suppose T ◦ S = x as found. Start with S and get a right-inverse T1 so
S ◦ T1 = x. Then T = T ◦ x = T ◦ (S ◦ T1 ) = (T ◦ S) ◦ T1 = x ◦ T1 = T1 .
Proposition 4.20. The set P is a group under composition.
Proof. Let T ∈ P. Let p = expo T, so that log_k ∘ T ∘ exp_k ∼ exp_p for large enough k. Let T_1 = log_k ∘ T ∘ exp_{k−p}, so that T_1 ∼ x and (if k is large enough) T_1 is log-free. By Proposition 4.19 there is an inverse, say T_1 ∘ S_1 = x. Write S = exp_{k−p} ∘ S_1 ∘ log_k. Then
T ∘ S = exp_k ∘ T_1 ∘ log_{k−p} ∘ exp_{k−p} ∘ S_1 ∘ log_k = x.
Remark 4.21. We need a grid-based version of Proposition 4.12 to prove existence of a
grid-based compositional inverse using a grid-based fixed-point theorem. This is done
in [9, Sec. 8].
An Example Inverse
Consider the transseries S = log x + 1 + x^{−1} ∈ P. We want to discuss its compositional inverse. According to the method above, we should compute the inverse of S_1 = S ∘ exp = x + 1 + e^{−x} ∈ P. And if T_1 = S_1^{[−1]}, then S^{[−1]} = exp ∘ T_1.
For the inverse of S_1 = x + 1 + e^{−x}, write A = 1 + e^{−x}, make the ansatz T_1 = x − 1 + Σ_{j=1}^∞ a_j e^{−jx}, and solve either by iteration, or with a linear equation for each a_j in terms of the previous ones. (And a_j is rational times e^j.) And then S^{[−1]} = exp ∘ T_1.
Compositional Equations
Because of the group property Proposition 4.20 (or the grid-based version [9, Sec. 8]),
we know: Let S, T ∈ T. If S, T are both large and positive, then there is a unique
Y ∈ P with S = T ◦ Y .
Proof. (a) is from Proposition 4.20. (b) Apply (a) to 1/S and 1/T . (c) Apply (a) to
−S and −T . (d) Apply (b) to −S and −T . (e) Apply (b) to S − c and T − c. (f) Apply
(d) to S − c and T − c.
The concluding cases are clear.
Mean Value Theorem
Using Proposition 4.9, we get a MVT.
Proposition 4.23. Given A ∈ T, S_1, S_2 ∈ P, S_1 < S_2, there is S ∈ P so that
(A ∘ S_2 − A ∘ S_1)/(S_2 − S_1) = A′ ∘ S.
Proof. Write B = (A ∘ S_2 − A ∘ S_1)/(S_2 − S_1). We claim that Proposition 4.22 shows that there is a solution S to B = A′ ∘ S. So we have to show that A′ and B are in the same case of Proposition 4.22.
Let c ∈ R. If A′ > c, then (A − cx)′ > 0, and therefore by Proposition 4.9 (A − cx) ∘ S_1 < (A − cx) ∘ S_2, so A ∘ S_2 − A ∘ S_1 > c(S_2 − S_1), so (A ∘ S_2 − A ∘ S_1)/(S_2 − S_1) > c, so B > c. Similarly: if A′ < c, then B < c. These hold for all real c, so in fact A′ and B are in the same case.
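As a real-function sanity check of the Mean Value Theorem shape (my own illustration, reading A = exp as a function of a real variable):

```python
import math

s1, s2 = 1.0, 2.0
avg = (math.exp(s2) - math.exp(s1)) / (s2 - s1)  # (A∘S2 − A∘S1)/(S2 − S1)
s = math.log(avg)                                # solve A'(s) = e^s = avg
```

Here s ≈ 1.5413, strictly between S_1 = 1 and S_2 = 2, as the transseries version also guarantees.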
The following proposition, too, has—so far—only an involved proof, which will not
be given here. See Section 5 for this and still more versions of the Mean Value Theorem.
(g) The only case left is T = a for some a ∈ R, so T (A) = T (B) = a = K, and this
case was taken care of at the beginning of the proof. Or let S = (A + B)/2 to get S
strictly between A and B when A 6= B.
Remark 4.27. Using 4.26 we can deduce 4.25 from 4.24 without the need of 4.23. But
4.24 is still the difficult step.
5 Taylor’s Theorem
Here we will formulate many versions of Taylor’s Theorem. Unfortunately, proofs are
(as far as I know) still quite involved. Proofs (for most cases) will not be included here.
See [7, §6] for well-based transseries and [13, §5.3] for grid-based transseries. But in
some cases it may not be clear that they have proved everything listed here.
Recall the definitions of G_N, G_{N,M}, G_•, etc. If A is a set of monomials, and S ∈ P, write A ∘ S := { g ∘ S : g ∈ A }. For U ∈ T, we say U ≺ A if U ≺ g for all g ∈ A. Recall that if g ∈ G_{N,M} \ G_{N−1,M} and g ≺ 1, then g ≺ G_{N−1,M}.
Let T ∈ T, S_1, S_2 ∈ P. For n ∈ N define
∆_n(T, S_1, S_2) := T(S_2) − Σ_{k=0}^{n−1} (T^{(k)}(S_1)/k!) (S_2 − S_1)^k.
Thus:
∆_0(T) = T(S_2),
∆_1(T) = T(S_2) − T(S_1),
∆_2(T) = T(S_2) − T(S_1) − T′(S_1) · (S_2 − S_1),
∆_3(T) = T(S_2) − T(S_1) − T′(S_1) · (S_2 − S_1) − (1/2) T″(S_1) · (S_2 − S_1)².
Taylor's Theorem will say, under appropriate hypotheses,
∆_n(T) ∼ (T^{(n)}(S_1)/n!) (S_2 − S_1)^n.
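For ordinary real functions the same ∆_n is the Taylor remainder; a quick numeric sketch (my own illustration) with T = exp, where every derivative is again exp:

```python
import math

def delta(n, T, dT, s1, s2):
    # Δ_n(T, s1, s2) = T(s2) − Σ_{k<n} T^{(k)}(s1) (s2 − s1)^k / k!
    # dT[k] is the k-th derivative of T (dT[0] is T itself)
    return T(s2) - sum(dT[k](s1) * (s2 - s1) ** k / math.factorial(k)
                       for k in range(n))

dT = [math.exp] * 4          # all derivatives of exp are exp
s1, h = 1.0, 0.01
lhs = delta(3, math.exp, dT, s1, s1 + h)
leading = math.exp(s1) * h ** 3 / math.factorial(3)  # T'''(s1)(s2−s1)^3/3!
```

The ratio lhs/leading is 1 + O(h), illustrating the shape ∆_n(T) ∼ (T^{(n)}(S_1)/n!)(S_2 − S_1)^n.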
[B_n] Let T ∈ T, let S_1, S_2 ∈ P, and let n ∈ N. If T^{(n+1)} > 0 and S_1 < S_2, then ∆_{n+1}(T, S_1, S_2) > 0.
Let U, V ∈ T, S_1, S_2 ∈ P. If U′ > 0, V > 0, S_1 < S_2, then
U(S_1) ∫_{S_1}^{S_2} V < ∫_{S_1}^{S_2} U V < U(S_2) ∫_{S_1}^{S_2} V.
Definition 6.3. Here is a similar convergence, applying to well-based transseries, but which makes sense even for grid-based transseries.
Let A ⊆ G be well ordered. T_j →_A T iff supp(T_j) ⊆ A for all j and the family { supp(T_j − T) } is point-finite; T_j →_W T iff there exists well ordered A ⊆ G with T_j →_A T.
Basics
The asymptotic topology is discrete on T_{NM} = R⟪G_{NM}⟫, the transseries of given height and depth. Indeed, if T ∈ T_{NM}, then for n > N the set T + o(1/exp_n) is open and T_{NM} ∩ (T + o(1/exp_n)) = {T}. So a net contained in some T_{NM} converges iff it is eventually constant. The series representing T ∈ T (for example the series Σ_{j=0}^∞ x^{−j}) is essentially never H-convergent—it is H-convergent only if it has all but finitely many terms equal to 0.
For each m, the "coefficient" map T ↦ T[m] is continuous from (T, asymptotic) to (R, discrete). Indeed, given m and T_0 ∈ T, the function T[m] is constant on the coset T_0 + o(m). So it is better than continuous: it is locally constant.
The series representing T ∈ T is C-convergent to T. And W-convergent. Consider the sequence x^{−log j} (j = 1, 2, ⋯). This set is well ordered but not grid-based. So x^{−log j} →_W 0 but not x^{−log j} →_C 0.
Coefficient maps T [m] are C-continuous and W-continuous. I guess locally constant,
too, since sets of the form { T ∈ T : T [m] = a } are C-open and W-open.
The whole transline T is not metrizable for C or W. Let T_{jk} = x^{−j}e^{kx}. Then according to C convergence, lim_{j→∞} T_{jk} = 0 for each k. If the convergence were given by a metric, we could choose indices j_k so that
lim_{k→∞} T_{j_k k} = 0.
(For example, for each k choose j_k so that the distance from T_{j_k k} to 0 is < 1/k.) But that is false for C or W.
A pseudo limit is not expected to be unique, but in our setting there is a distin-
guished pseudo limit. It is the limit (in the W topology) of Sβ , where Sβ is the longest
common truncation of { Tα : α ≥ β }. See the “stationary limit” in [12].
Here is a well-based fixed point theorem from van der Hoeven [12, Thm. 4.7]. Note
that in our case where M is totally ordered, the special ordering ≺· coincides with
the usual ordering ≺ .
Proposition 6.4. Let Φ : R[[M]] → R[[M]]. Assume for all T_1, T_2 ∈ R[[M]], if T_1 ≠ T_2, then Φ(T_1) − Φ(T_2) ≺ T_1 − T_2. Then there is a unique S ∈ R[[M]] such that Φ(S) = S.
Proof. Uniqueness. Assume Φ(S1 ) = S1 and Φ(S2 ) = S2 . If S1 6= S2 , then Φ(S1 ) −
Φ(S2 ) = S1 − S2 6≺ S1 − S2 , a contradiction. So S1 = S2 .
Existence (outline). Choose any nonzero T0 ∈ R[[M]]. For ordinals α we define Tα
recursively. Assume Tα has been defined, and consider two cases. If Φ(Tα) = Tα, then
S = Tα is the required fixed point. Otherwise, let Tα+1 = Φ(Tα). If λ is a limit ordinal
and Tα has been defined for all α < λ, then (recursively) (Tα)α<λ is pseudo Cauchy, so let
Tλ be a pseudo limit of (Tα)α<λ. Eventually the process must end, because there are
more ordinals than elements of R[[M]].
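The transfinite iteration above can be imitated in a finite setting. The following Python sketch is only an analogue, not the transseries construction itself: ordinary power series truncated at a fixed order stand in for R[[M]], and the operator phi is an illustrative choice.

```python
# A finite analogue of the iteration in Proposition 6.4.  Ordinary formal
# power series, truncated at order 10, stand in for R[[M]]; the operator
# phi(T) = 1 + x*T is "contracting" in the sense that phi(T1) - phi(T2)
# = x*(T1 - T2) is one order smaller than T1 - T2.  Names are illustrative.

ORDER = 10  # truncation order; a series is its coefficient list c[0..ORDER-1]

def phi(coeffs):
    """phi(T) = 1 + x*T, computed on truncated coefficient lists."""
    return [1.0] + coeffs[:ORDER - 1]

T = [0.0] * ORDER          # start from T_0 = 0
for _ in range(ORDER):     # after ORDER steps every coefficient has settled
    T = phi(T)

print(T == phi(T))  # True: T is the geometric series 1 + x + x^2 + ...
```

Here the iteration stabilizes after finitely many steps only because the truncation order is finite; in R[[M]] one may need transfinitely many steps, with pseudo limits taken at limit ordinals, as in the proof above.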
Example. Consider Q = x + log x + log_2 x + log_3 x + · · · . The partial sums constitute
a pseudo Cauchy sequence in T, but the pseudo limits (such as Q itself) in R[[G]] are
not in T. This Q is the solution of Φ(Y) = Y, where Φ(Y) = x + (Y ◦ log) is contracting
on R[[G]].
Addition
Addition (S, T) ↦ S + T is H-continuous. Given m ∈ G, we have
(S + o(m)) + (T + o(m)) ⊆ S + T + o(m).
Multiplication
Multiplication (S, T) ↦ ST is H-continuous. We have
(S + o(m))(T + o(n)) ⊆ ST + o((mag S)n + (mag T)m + mn),
so given S, T ∈ T and g ∈ G, there exist m, n ∈ G with (S + o(m))(T + o(n)) ⊆
ST + o(g).
Multiplication is C-continuous [8, Prop. 3.48]. Let Si →_C S and Ti →_C T. There exist
µ, m so that Si →^{µ,m} S and Ti →^{µ,m} T. Then there exist µ̃, m̃ with J^{µ,m} · J^{µ,m} ⊆ J^{µ̃,m̃}.
(In fact we may take µ̃ = µ^2 and m̃ = m.) Now given any g ∈ J^{µ̃,m̃}, there are finitely
many pairs (m, n) ∈ J^{µ,m} × J^{µ,m} with mn = g. For each such m or n, except for finitely
many indices i, we have Si[m] = S[m] and Ti[n] = T[n]. So, except for i in a finite union
of finite sets, we have (Si Ti)[g] = (ST)[g]. Therefore Si Ti →_C ST.
Multiplication is W-continuous. This will be similar to C-continuity. We need to
use [8, Prop. 3.27]: Given any well ordered A ⊆ G, the set A · A is well ordered, and
for any g ∈ A · A, there are finitely many pairs (m, n) ∈ A × A with mn = g.
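Both continuity proofs turn on this finite-decomposition property. A minimal Python sketch of it, in a real-exponent analogue: monomials x^e are modeled by their exponents, so a factorization mn = g becomes a sum m + n = g; the series below are illustrative.

```python
from itertools import product

# The coefficient of a monomial g in a product ST involves only the
# finitely many pairs (m, n) in supp S x supp T with mn = g.  Here a
# series is a dict exponent -> coefficient, and mn = g reads m + n = g.

def product_coefficient(S, T, g):
    """Coefficient of exponent g in the product of the series S and T."""
    pairs = [(m, n) for m, n in product(S, T) if m + n == g]  # finite set
    return sum(S[m] * T[n] for m, n in pairs)

S = {0: 1.0, -1: 2.0, -2: 3.0}   # S = 1 + 2x^{-1} + 3x^{-2}
T = {0: 1.0, -1: -1.0}           # T = 1 - x^{-1}
print(product_coefficient(S, T, -1))   # 2*1 + 1*(-1) = 1.0
```

For genuinely infinite grid-based or well-ordered supports, the point of [8, Prop. 3.27] is that the pair set above is still finite for each g, so each product coefficient is a finite sum.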
Differentiation
First note
(T + o(n))′ ⊆ T′ + o(n′), provided n ≠ 1.
Given any m ∈ G, there is S ∈ T with S′ = m, by [8, Prop. 4.29]. We may assume the
constant term of S is zero. So let n = mag(S); then n′ ∼ S′ = m, so
(T + o(n))′ ⊆ T′ + o(m).
In fact, since n did not depend on T, we have shown that differentiation is H-uniformly
continuous.
Now consider C-continuity. From [8, Prop. 3.76] or [9, Prop. 4.7]: Given µ, m, there
exist µ̃, m̃ so that if T ∈ T^{µ,m}, then T′ ∈ T^{µ̃,m̃}; and if Tj ∈ T^{µ,m} with
Tj →^{µ,m} T, then Tj′ →^{µ̃,m̃} T′.
W-continuity probably needs a proof like [8, Prop. 3.76].
The derivative is computed as an H-limit. From 5.1[A2] we have, for U ≺ G_{N−1,M} ◦ S,
(T(S + U) − T(S))/U − T′(S) ∼ T′′(S)U/2,
so in the H-topology
T′(S) = lim_{U→0} (T(S + U) − T(S))/U.
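The asymptotic relation above can be checked numerically in a real-valued analogue, taking T = exp and evaluating at a point; these choices, and the helper function below, are illustrative.

```python
import math

# Real-valued analogue of the relation
#   (T(S+U) - T(S))/U - T'(S)  ~  T''(S) U / 2   as U -> 0,
# with T = exp (so T' = T'' = exp), evaluated at a point s.

def quotient_error(f, fprime, s, u):
    """Difference quotient of f at s with step u, minus f'(s)."""
    return (f(s + u) - f(s)) / u - fprime(s)

s = 1.0
for u in [1e-2, 1e-3, 1e-4]:
    ratio = quotient_error(math.exp, math.exp, s, u) / (math.exp(s) * u / 2)
    print(u, ratio)   # ratios tend to 1 as u shrinks
```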
Integration
Integration is continuous? This should be investigated.
Composition (Left)
For a fixed (large positive) S, consider the composition function T ↦ T ◦ S.
If Ti →_C T, then Ti ◦ S →_C T ◦ S [8, Prop. 3.99], which depends on [8, Prop. 3.95].
For W-continuity we need a proof like [8, Prop. 3.95].
Now consider H-continuity. Note
(T + o(n)) ◦ S ⊆ (T ◦ S) + o(n ◦ S).
Composition (Right)
What about continuity of composition T ◦S as a function of the right composand S? It
is certainly false for C and W convergence. Indeed, let T = ex . Then to compute even
one term of eS we need to know all of the large terms of S; there could be infinitely
many large terms.
Now consider H-continuity.
Proposition 6.5. (i) Function exp is H-continuous on T. (ii) Function log is H-
continuous on (the positive subset of) T. (iii) Let T ∈ T. Then the function S ↦ T ◦ S is
H-continuous on P.
Proof. (i) Let S0 ∈ T and m ∈ G be given. Let
n = m mag(e^{−S0}) if m mag(e^{−S0}) ≼ 1, and n = 1 otherwise.
Now if s := S − S0 ≺ n, we have s ≺ 1, so e^s − 1 ∼ s ≺ n. And
e^S − e^{S0} = e^{S0}(e^{S−S0} − 1) ≺ e^{S0} n ≼ m.
That is: if S ∈ S0 + o(n), then e^S ∈ e^{S0} + o(m). This shows that exp is H-continuous
at S0.
(ii) Let S0 > 0 and m ∈ G be given. Then take
n = m mag S0 if m ≼ 1, and n = mag S0 otherwise.
Now assume S − S0 ≺ n. Then
(S − S0)/S0 ≺ n/(mag S0) ≼ 1,
so
log(S) − log(S0) = log(S/S0) = log(1 + (S − S0)/S0) ∼ (S − S0)/S0 ≺ n/(mag S0) ≼ m.
(iii) We will apply Corollary 3.2. Let R be the set of all T ∈ T such that the
function S ↦ T ◦ S is H-continuous. We now check the conditions of Corollary 3.2. If
g ∈ R, then g ◦ log ∈ R by (ii); this proves (f′). If L ∈ R, then e^L ∈ R by (i). And
x^b = e^{b log x} ∈ R by (i) and (ii). So x^b e^L ∈ R. This proves (e′).
Finally we must prove (d′). Let T ∈ T and assume supp T ⊆ R. (If T = 0 we have
T ∈ R trivially, so assume T ≠ 0.) Let g0 = mag T, so g0 ∈ R. Note that T/g0 ≍ 1 ≺ x.
By Proposition 4.12 we have
(T/g0) ◦ S2 − (T/g0) ◦ S1 ≺ S2 − S1,
so S ↦ (T/g0) ◦ S is (uniformly) H-continuous. By hypothesis, S ↦ g0 ◦ S is H-
continuous. So (since multiplication is H-continuous) it follows that the product
S ↦ ((T/g0) ◦ S) · (g0 ◦ S) = T ◦ S
is H-continuous.
So we may conclude R = T, as required.
Fixed Point
Fixed point with parameter: find conditions on Φ(S, T), beyond “contractive in S for each
T”, so that if S = S_T solves S = Φ(S, T), then T ↦ S_T is a continuous function of T.
Compare [12]. This should be investigated for all three topologies.
Proof. For N, M ∈ N, let A(N, M) mean that the statement of the theorem holds for
all T ∈ G_{N,M}, and let B(N, M) mean that the statement of the theorem holds for
all T ∈ T_{N,M}. Note that for any N, M ∈ N, from U ≺ G_{N,M} ◦ S it follows that U ≺ S:
indeed, 1 ∈ G_{N,M}, so U ≺ 1 ≺ S.
(1) Claim: Let S ∈ P, U ∈ T, and assume U ≺ S. Then
log(S + U) − log(S) ∼ U/S. (†log)
Indeed, U/S ≺ 1, so by the Maclaurin series for log(1 + z) we get
log(S + U) = log(S(1 + U/S)) = log(S) + log(1 + U/S)
= log(S) − Σ_{j=1}^∞ ((−1)^j/j)(U/S)^j = log(S) + U/S + o(U/S).
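A quick numerical illustration of the claim, in a real-valued analogue: take S(x) = x^2 and U(x) = x, so that U ≺ S as x → ∞; the particular choices are illustrative.

```python
import math

# Check that log(S + U) - log(S) ~ U/S when U is much smaller than S:
# the ratio of the two sides should approach 1 as x grows.

def log_increment_ratio(x):
    s, u = x**2, x          # U/S = 1/x -> 0
    return (math.log(s + u) - math.log(s)) / (u / s)

for x in [10.0, 100.0, 1000.0]:
    print(x, log_increment_ratio(x))   # ratios approach 1
```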
(S + U)^b − S^b ∼ b S^{b−1} · U. (†G0)
Note that even if b = 0, the equation (S + U)^b = S^b + b S^{b−1} U + o(S^{b−1} U) remains true.
(3) B(0, 0): Let T ∈ T0, T ∉ R, S ∈ P, U ∈ T, and assume U ≺ S. Then (†).
Let dom T = a0 x^{b0}. First consider the case b0 ≠ 0. Then T′ ∼ a0 b0 x^{b0−1} and
For any other term a x^b of T, we have b < b0 and
U1 := log(S + U) − log(S) ∼ U/S ≺ 1 ≺ log S.
S
Now applying B(0, M) to T1, S1 = log S, U1, we get
U1 := log(S + U) − log(S) ∼ U/S ≺ 1.
Now if we write S1 = log S, then
U1 ∼ U/S ≺ U ≺ G_{N−1,M+1} ◦ S = G_{N−1,M} ◦ S1.
Applying B(N, M) to T1, S1, U1, we get
The other cases 5.1[An ] and [A∞ ] would be proved in the same way. See [7,
Sect. 6.8], [13, Prop. 5.11]. The argument will perhaps use the formula for the jth
derivative of a composite function.
The condition U ≺ GN −1,M ◦ S comes from [7, Sect. 6.8]. In [13, Prop. 5.11] we can
see that in fact we do not need to use all of GN −1,M ; in the notation of [9, Def. 7.1], it
suffices that U ≺ (1/m) ◦ S for all m ∈ tsupp T .
T ′ > 0 =⇒ T ◦ S1 < T ◦ S2 ,
T ′ = 0 =⇒ T ◦ S1 = T ◦ S2 ,
T ′ < 0 =⇒ T ◦ S1 > T ◦ S2 .
so
ag ◦ S2 − ag ◦ S1 ≺ g0 ◦ S2 − g0 ◦ S1. (1)
Summing (1) over all terms of A, we get
A ◦ S2 − A ◦ S1 ≺ g0 ◦ S2 − g0 ◦ S1.
Summing (1) over all terms of B except the dominant term, we get
B ◦ S2 − B ◦ S1 ≍ g0 ◦ S2 − g0 ◦ S1.
Therefore, A ◦ S2 − A ◦ S1 ≺ B ◦ S2 − B ◦ S1, as required.
Case U ≻ S1. Then S2 = S1 + U ∼ U ≻ S1. If b > 0, then S1^b ≺ S2^b, so S2^b − S1^b ∼ S2^b.
But if b < 0, then S1^b ≻ S2^b, so S2^b − S1^b ∼ −S1^b. So we may compute:
if b > a > 0, then S2^b − S1^b ∼ S2^b ≻ S2^a ∼ S2^a − S1^a;
if b > 0 > a, then S2^b − S1^b ∼ S2^b ≻ 1 ≻ S1^a ∼ S1^a − S2^a;
if 0 > b > a, then S1^b − S2^b ∼ S1^b ≻ S1^a ∼ S1^a − S2^a.
This completes the proof for x^a ≺ x^b. The computations for log x ≺ x^b or x^a ≺ log x
are next.
Case U ≺ S1. Then
S2/S1 = (S1 + U)/S1 = 1 + U/S1, and log(S2) − log(S1) = log(S2/S1) ∼ U/S1.
If b > 0, then S2^b − S1^b ≍ S1^{b−1} U ≻ U/S1 ∼ log(S2) − log(S1). And if a < 0, then
S1^a − S2^a ≍ S1^{a−1} U ≺ U/S1 ∼ log(S2) − log(S1).
Case U ≍ S1. Then U/S1 ∼ c, so
log(S2) − log(S1) = log(1 + U/S1) ∼ log(1 + c) ≍ 1.
If b > 0, then S2^b − S1^b ≍ S1^b ≻ 1 ≍ log(S2) − log(S1). If a < 0, then S1^a − S2^a ≍ S1^a ≺
1 ≍ log(S2) − log(S1).
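The case U ≍ S1 can be illustrated numerically in a real-valued analogue, with S1(x) = x and U(x) = c·x for a constant c; the particular values are illustrative.

```python
import math

# When U ≍ S1, say U = c*S1, the difference log(S2) - log(S1) is the
# constant log(1 + c): comparable to 1, as the case analysis asserts.

c = 2.0
for x in [10.0, 1000.0, 100000.0]:
    s1 = x
    s2 = s1 + c * s1
    print(x, math.log(s2) - math.log(s1))   # log(3), up to rounding, for every x
```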
Case U ≻ S1. Then S2/S1 ≻ 1, so log(S2) − log(S1) < log(S2). If b > 0, then
S2^b − S1^b ≍ S2^b ≻ log(S2) ≽ log(S2) − log(S1). If a < 0, then S1^a − S2^a ≍ S1^a ≺ 1 ≼
log(S2) − log(S1).
satisfies C.
Proof. First A ∪ {log} satisfies C by Lemma 8.6 and D by Lemma 8.8. Then T_{A∪{log}}
satisfies C by Lemma 8.4.
Let g ∈ Ã, so g = e^L with L ∈ T_{A∪{log}} purely large, and let S1, S2 ∈ P with S1 < S2.
Then g′ = L′ e^L, so g′ has the same sign as L′. Take the case g′ > 0. Since L ∈ T_{A∪{log}},
which satisfies C, we have L ◦ S1 < L ◦ S2. Exponentiate to get g ◦ S1 < g ◦ S2, as
required.
The case g′ < 0 is done in the same way.
Lemma 8.10. Assume T_{G_N ∪{log}} satisfies C and D. Let B, L ∈ T_{G_N ∪{log}}, with L
purely large, and a = e^L ∈ G_{N+1}. Assume a ≺ 1 ≺ B. Let S1, S2 ∈ P with S1 < S2.
Then
B(S2) − B(S1) ≻ a(S1) − a(S2).
So
B(S2) − B(S1) ≻ e^{L(S1)} − e^{L(S2)} = |a(S2) − a(S1)|.
Case 2. S2 − S1 ≺ G_N ◦ S1. Now S2 − S1 ≺ G_{N−1} ◦ S1, so by Proposition 7.1 we
have
We will prove this in cases.
Case 1: b(S1) ≻ b(S2). Then b(S1) − b(S2) ∼ b(S1), so
b(S1) (m(S2) − m(S1))/(b(S2) − b(S1)) ∼ m(S1) − m(S2) ≺ 1,
as claimed.
Case 2: b(S1) ≼ b(S2). If b(S2) > b(S1), then apply Lemma 8.10 [to m ≺ 1 ≺ log b]
to get
b(S1) (m(S2) − m(S1))/(b(S2) − b(S1)) ≺ b(S1) (log b(S2) − log b(S1))/(b(S2) − b(S1))
= b(S1) log(b(S2)/b(S1))/(b(S2) − b(S1))
< b(S1) (b(S2)/b(S1) − 1)/(b(S2) − b(S1)) = 1.
On the other hand, if b(S2) < b(S1), then again apply Lemma 8.10 [to m ≺ 1 ≺ log b]
to get
b(S1) (m(S1) − m(S2))/(b(S1) − b(S2)) ≺ b(S1) (log b(S1) − log b(S2))/(b(S1) − b(S2))
= b(S1) log(b(S1)/b(S2))/(b(S1) − b(S2))
< b(S1) (b(S1)/b(S2) − 1)/(b(S1) − b(S2)) = b(S1)/b(S2) ≼ 1.
So in both cases, we have established (2).
Now compute
a(S2) − a(S1) = b(S2)m(S2) − b(S1)m(S1)
= (b(S2) − b(S1)) ( m(S2) + b(S1) (m(S2) − m(S1))/(b(S2) − b(S1)) )
≺ b(S2) − b(S1).
The final step uses (2) together with m(S2) ≺ 1.
Proposition 8.12. T• = R[[G•]] satisfies C and D.
Proof. By Lemmas 8.5 and 8.7, G0 satisfies C and D. Applying Lemmas 8.9 and 8.11
inductively, we conclude that G_N satisfies C and D for all N ∈ N. And therefore
G• = ∪_N G_N satisfies C and D, by Remark 8.2. Finally, T• satisfies C and D by
Lemmas 8.3 and 8.4.
Proposition 8.13. Let R ⊆ T and define R̃ := { T ◦ log : T ∈ R }. If R satisfies C,
then R̃ satisfies C. If R satisfies D, then R̃ satisfies D.
Proof. Assume R satisfies C. Let Q ∈ R̃, so that Q = T ◦ log with T ∈ R. Note
Q′ = (T′ ◦ log)/x, so that T′ and Q′ have the same sign. Let S1, S2 ∈ P with S1 < S2.
Then log(S1), log(S2) ∈ P with log(S1) < log(S2). Now if T′ > 0, then applying
property C of R to log(S1) and log(S2), we get T(log(S1)) < T(log(S2)). That is:
Q(S1) < Q(S2). The cases T′ = 0 and T′ < 0 are similar.
The proof for D is done in the same way.
Theorem 8.14. The whole transline T satisfies C and D.
9 Further Transseries
Suppose we allow well-based transseries, but do not end in ω steps. Begin as in Def-
inition 2.1. Write Wω = W•,• , where ω is the first infinite ordinal. Then proceed by
transfinite recursion:
L If α is an ordinal and Gα has been defined, let Tα = R[[Gα ]]
and Wα+1 = e : L ∈ Tα is purely large . If λ is a limit ordinal and Wα have been
defined for all α < λ, let [
Wλ = Wα .
α<λ
(In the notation of [17, §2.3], H ∈ L and G ∈ Lexp.) This G is interesting (as those
who have thought about convergence and divergence of series will know) because, for
actual transseries T, we have ∫T ≻ 1 if and only if T ≻ G. That is, for S ∈ T we have:
if S ≻ 1, then S′ ≻ G; if S ≺ 1, then S′ ≺ G.
So what happens if we attempt to investigate ∫G, if possible? It seems that there
is no Schmeling transseries S with S′ = G.
Y0 = x,
Y1 = log e^{ax},
Y2 = log log e^{a e^{ax}},
Y3 = log log log e^{a e^{a e^{ax}}},
and so on. Iteration of transseries suggests a solution Y not of finite height. It seems
Y should begin
and so on; order-type ω. Writing µ1 for e^{−ax}, these terms are real coefficients times powers
of µ1. Beyond all of those, we have terms involving µ2 = exp(−a exp(ax)), beginning
µ2 ( log(a) µ1 − log(a)^2 µ1^2 + log(a)^3 µ1^3 − log(a)^4 µ1^4 + log(a)^5 µ1^5 + · · · )
+ µ2^2 ( −(log(a)^2/2) µ1 + ((log(a)^3 − log(a)^2)/2) µ1^2 + ((2 log(a)^3 − log(a)^4)/2) µ1^3
+ ((log(a)^5 − 3 log(a)^4)/2) µ1^4 + ((4 log(a)^5 − log(a)^6)/2) µ1^5 + · · · ) + · · ·
Order-type ω^2. Beyond all those we have terms involving µ3 = exp(−a exp(a exp(ax)));
order-type ω^3. And so on, with µk of height k for k ∈ N.
Surreal Numbers
If this extension for well-based transseries is continued through all the ordinals, the
result is a large (proper class) real-closed ordered field, with additional operations.
J. H. Conway’s system of surreal numbers [2] is also a large (proper class) real-closed
ordered field, with additional operations. Any ordered field (with a set of elements,
not a proper class) can be embedded in either of these. We can build recursively
a correspondence between the well-based transseries and the surreal numbers, but it
involves many arbitrary choices.
[13, p. 16] Is there a canonical correspondence, not only preserving the ordered
field structure, but also some of the additional operations? Or is there a canonical
embedding of one into the other? Perhaps we need to take the recursive way in which
one of these systems is built up and find a natural way to imitate it in the other system.
Reals should correspond to reals. The transseries x should correspond to the surreal
number ω. But many more details are not determined just by these.
References
[1] M. Aschenbrenner, L. van den Dries, Asymptotic differential algebra. In [6],
pp. 49–85
[2] J. H. Conway, On numbers and games. Second edition. A K Peters, Natick, MA,
2001
[3] O. Costin, Topological construction of transseries and introduction to generalized
Borel summability. In [6], pp. 137–175
[4] O. Costin, Global reconstruction of analytic functions from local expansions and
a new general method of converting sums into integrals. preprint, 2007.
http://arxiv.org/abs/math/0612121
[5] O. Costin, Asymptotics and Borel Summability. CRC Press, London, 2009
[6] O. Costin, M. D. Kruskal, A. Macintyre (eds.), Analyzable Functions and Appli-
cations (Contemp. Math. 373). Amer. Math. Soc., Providence RI, 2005
[7] L. van den Dries, A. Macintyre, D. Marker, Logarithmic-exponential series. Annals
of Pure and Applied Logic 111 (2001) 61–113
[8] G. Edgar, Transseries for beginners. preprint, 2009.
http://arxiv.org/abs/0801.4877 or
http://www.math.ohio-state.edu/~edgar/preprints/trans_begin/
[9] G. Edgar, Transseries: ratios, grids, and witnesses. forthcoming.
http://www.math.ohio-state.edu/~edgar/preprints/trans_wit/
[10] G. Edgar, Fractional iteration of series and transseries. preprint, 2009.
http://www.math.ohio-state.edu/~edgar/preprints/trans_frac/
[11] G. Higman, Ordering by divisibility in abstract algebras. Proc. London Math. Soc.
2 (1952) 326–336
[12] J. van der Hoeven, Operators on generalized power series. Illinois J. Math. 45
(2001) 1161–1190
[13] J. van der Hoeven, Transseries and Real Differential Algebra (Lecture Notes in
Mathematics 1888). Springer, New York, 2006
[14] J. van der Hoeven, Transserial Hardy fields. preprint, 2006
[15] S. Kuhlmann, Ordered Exponential Fields. American Mathematical Society, Prov-
idence, RI, 2000
[16] S. Scheinberg, Power series in one variable. J. Math. Anal. Appl. 31 (1970) 321–
333
[17] M. C. Schmeling, Corps de transséries. Ph.D. thesis, Université Paris VII, 2001