
Transseries: Composition, Recursion, and Convergence

G. A. Edgar

arXiv:0909.1259v1 [math.RA] 7 Sep 2009

November 1, 2018

Abstract
Additional remarks and questions for transseries. In particular: properties of
composition for transseries; the recursive nature of the construction of R⟦x⟧; modes of
convergence for transseries. There are, at this stage, questions and missing proofs in
the development.

Contents

1 Introduction

2 Well-Based Transseries

3 The Recursive Structure of the Transline

4 Properties of Composition

5 Taylor’s Theorem

6 Topology and Convergence

7 Proof for the Simplest Taylor Theorem

8 Proof for Propositions 4.9 and 4.10

9 Further Transseries

1 Introduction
Most of the calculations done with transseries are easy, once the basic framework is
established. But that may not be the case for composition of transseries. Here I will
discuss a few of the interesting features of composition.
The ordered differential field T = R⟦x⟧ = R⟦G⟧ of (real grid-based) transseries
is completely explained in my recent expository introduction [8]. A related paper is
[9]. Other sources for the definitions are: [1], [3], [7], [13], [15]. I will generally follow
the notation from [8]. Van der Hoeven [13] sometimes calls T the transline.
So T is the set of all grid-based real formal linear combinations of monomials from
G, while G is the set of all e^L for L ∈ T purely large. (Because of logarithms, there is
no need to write separately two factors as x^b e^L.)
Notation 1.1. For transseries A, we already use exponents A^n for multiplicative powers,
and parentheses A^(n) for derivatives. Therefore let us use square brackets A^[n] for
compositional powers. In particular, we will write A^[−1] for the compositional inverse.
Thus, for example, expn = exp^[n] = log^[−n].
Write ln for logn if n > 0; write l0 = x; write ln = exp−n if n < 0.
Recall [8, Prop. 3.24 & Prop. 3.29] two canonical decompositions for a transseries:

Proposition 1.2 (Canonical Additive Decomposition). Every A ∈ R⟦G⟧ may be
written uniquely in the form A = L + c + V, where L is purely large, c is a constant,
and V is small.

Proposition 1.3 (Canonical Multiplicative Decomposition). Every nonzero transseries
A ∈ R⟦G⟧ may be written uniquely in the form A = a · g · (1 + U) where a is
nonzero real, g ∈ G, and U is small.

Notation 1.4. Little-o and big-O. For A ≠ 0 we define sets,

o(A) := { T ∈ T : T ≺ A },   O(A) := { T ∈ T : T ≼ A }.

These are used especially when A is a monomial, but o(A) = o(mag A). Conventionally,
we write T = U + o(A) when we mean T ∈ U + o(A) or T − U ≺ A.
Notation 1.5. For use with a finite ratio set µ ⊂ Gsmall, we define

oµ(A) := { T ∈ T : T ≺µ A },   Oµ(A) := { T ∈ T : T ≼µ A }.

This time monomials do not suffice: if µ = {x^{−1}, e^{−x}}, then oµ(x^{−1} + e^{−x}) ≠ oµ(x^{−1}).
Remark 1.6. Note the simple relationship between < and ≺: Define |T| = T if T ≥ 0,
|T| = −T if T < 0. Then

U ≺ V ⇐⇒ |U| < k|V| for all k ∈ R, k > 0,
U ≼ V ⇐⇒ |U| < k|V| for some k ∈ R, k > 0,
U ≍ V ⇐⇒ 1/k < U/V < k for some k ∈ R, k > 1,
U ∼ V ⇐⇒ 1/k < U/V < k for all k ∈ R, k > 1.

The reason we can do this is the following interesting property: if 1/k < T < k for
some k ∈ R, k > 1, then there is c ∈ R, c > 0, with T ∼ c.
Remark 1.7. Worth noting: If 0 < A ≤ B, then A ≼ B. If 0 > A ≥ B, then A ≼ B. If
A > 0, B > 0, A ≺ B, then A < B. If A < 0, B < 0, A ≺ B, then A > B.

2 Well-Based Transseries
Besides the grid-based transseries as found in [8], we may also refer to the well-based
version as found, for example in [7] or [15].
Definition 2.1. For an ordered abelian group M, let R[[M]] be the set of Hahn series
with support which is well ordered (according to the reverse of ≻). Begin with group
W0 = { x^a : a ∈ R } and field T0 = R[[W0]]. Assuming field TN = R[[WN]] has been
defined, let

WN+1 = { x^b e^L : L ∈ TN is purely large }

and TN+1 = R[[WN+1]]. Then

W• = ⋃_{N=0}^{∞} WN,   T• = ⋃_{N=0}^{∞} TN.

Now as before,

W•,M = { g ◦ logM : g ∈ W• },   T•,M = { T ◦ logM : T ∈ T• },

W•,• = ⋃_{M=0}^{∞} W•,M,   T•,• = ⋃_{M=0}^{∞} T•,M.

A difference from the grid-based case: T•,• ≠ R[[W•,•]]. The domain of exp is T•,• and
not all of R[[W•,•]].
Then T = T•,• is what I will mean here by “well based” transseries. This is the
system found in [7], for example. This system and others are explored in [15].
We have used the Fraktur letter G for “grid” and the Fraktur letter W for “well”.
The notation T is used for both; perhaps that will be confusing. It is intended that what
I say here can usually apply to either case.
Here is one of the results that the well-based theory depends on. (It is required,
for example, to show that T^{−1} has well-ordered support.) I am putting it here because
of its tricky proof. The result is attributed to Higman, with this proof due to Nash-Williams.

Proposition 2.2. Let M be a totally ordered abelian group. Let B ⊆ Msmall be a set
of small elements. Write B∗ for the monoid generated by B. If B is well ordered (for
the reverse of ≻), then B∗ is also well ordered.

Proof. Write Bn for the set of all products of n elements of B. Thus: B0 = {1},
B1 = B, B∗ = ⋃_{n=0}^{∞} Bn. If g ∈ B∗, define the length of g as

l(g) = min { n : g ∈ Bn }.

Since M is totally ordered, these are equivalent:


(i) B is well ordered (every nonempty subset has a greatest element),
(ii) any infinite sequence in B has a nonincreasing subsequence,
(iii) there is no infinite strictly increasing sequence in B.
We assume B is well ordered, so it has all three properties. We claim B∗ is well
ordered.

Suppose (for purposes of contradiction) that there is an infinite strictly increasing
sequence in B∗ . Among all infinite strictly increasing sequences in B∗ , let l1 be the
minimum length of the first term. Choose n1 that has length l1 and is the first term of
an infinite strictly increasing sequence in B∗ . Recursively, suppose that finite sequence
n1 ≺ n2 ≺ · · · ≺ nk has been chosen so that it is the beginning of some infinite
strictly increasing sequence in B∗ . Among all infinite strictly increasing sequences in
B∗ beginning with n1 , · · · , nk , let lk+1 be the minimum length of the (k + 1)st term.
Choose nk+1 of length lk+1 such that there is an infinite strictly increasing sequence
in B∗ beginning n1 , · · · , nk , nk+1 . This completes a recursive definition of an infinite
strictly increasing sequence (nk ) in B∗ .
Now because all elements of B are small and this sequence is strictly increasing,
nk ≠ 1. For each k, choose a way to write nk as a product of lk elements of B, then
let bk ∈ B be least of the factors. So nk = bk mk. Now (bk) is an infinite sequence in
B, so there is a nonincreasing subsequence (b_{k_j}), with b_{k_1} ≽ b_{k_2} ≽ · · ·. So

m_{k_j} = n_{k_j}/b_{k_j} ≺ n_{k_{j+1}}/b_{k_j} ≼ n_{k_{j+1}}/b_{k_{j+1}} = m_{k_{j+1}}

and (if k1 > 1)

n_{k_1−1} ≺ n_{k_1} ≼ n_{k_1}/b_{k_1} = m_{k_1}.

So n1 ≺ n2 ≺ · · · ≺ n_{k_1−1} ≺ m_{k_1} ≺ m_{k_2} ≺ m_{k_3} ≺ · · · is an infinite strictly increasing
sequence in B∗. But it begins with n1, · · · , n_{k_1−1} and l(m_{k_1}) = l_{k_1} − 1, contradicting
the minimality of l_{k_1}. This contradiction shows that there is, in fact, no infinite strictly
increasing sequence in B∗. So B∗ is well ordered.

Notation 2.3. For N ∈ N, N ≥ 1, write

W^pure_N = { e^L : L purely large, supp L ⊂ WN−1 \ WN−2 },

W^pure_0 = W0,   W−1 = {1}.

Of course the sets W^pure_N are subgroups of W•. Any g ∈ WN can be written
uniquely as g = ab with a ∈ WN−1 and b ∈ W^pure_N. Group WN is the direct product
of subgroups:

WN = W^pure_0 · W^pure_1 · · · W^pure_{N−1} · W^pure_N.

A set A ⊂ WN is decomposed as

A = { ab : b ∈ B, a ∈ Ab },   (∗)

where B ⊂ W^pure_N, and for each b ∈ B, the set Ab ⊂ WN−1. The ordering in A is
lexicographic:

a1 b1 ≺ a2 b2 ⇐⇒ b1 ≺ b2 or {b1 = b2 and a1 ≺ a2}.

So the set A is well ordered if and only if set B and all sets Ab are well ordered.
The lexicographic ordering is the “height wins” rule:

Proposition 2.4. Let N ∈ N, N ≥ 1. If g ∈ WN \ WN−1 and supp T ⊂ WN−1, then:
T ≺ g if g ≻ 1, and T ≻ g if g ≺ 1.
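
The N = 1, log-free instance of this rule is easy to make concrete. Here is a minimal Python sketch (my illustration, not from the paper): monomials x^b e^{cx} are encoded as pairs (c, b), and the “height wins” ordering is then exactly lexicographic tuple comparison.

    # A minimal sketch (not from the paper): log-free monomials x**b * exp(c*x)
    # encoded as (c, b).  "Height wins": the exponential part c decides
    # dominance, and the power b of x only breaks ties -- which is exactly
    # Python's lexicographic tuple comparison.
    def precedes(g1, g2):
        """g1 is far smaller than g2, where g = (c, b) means x**b * exp(c*x)."""
        return g1 < g2

    print(precedes((0, 100), (1, 0)))    # True: x^100 is far smaller than e^x
    print(precedes((-1, 0), (0, -100)))  # True: e^-x is far smaller than x^-100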

Decomposition of Sets
I include here a few more uses of the decomposition (∗). Skip to Section 3 if you are
primarily interested in the grid-based version of the theory.
Write m† = m′/m for the logarithmic derivative. In particular, if m = e^L ∈ W^pure_N,
N ≥ 2, then m† = L′ is supported in W^large_{N−1} \ WN−2, and if m = e^L ∈ W^pure_1, then
m† = L′ is supported in W0.
The existence of the derivative for transseries is stated like this: If T = ∑_{g∈A} cg g,
then T′ = ∑_{g∈A} cg g′. Let us consider it more carefully.
Theorem 2.5. Let A ⊆ WN,M be well ordered, and let T = ∑_{g∈A} cg g in TN,M have
support A. Then (i) the family { supp(g′) : g ∈ A } is point-finite; (ii) ⋃_{g∈A} supp(g′) is
well ordered; (iii) ∑_{g∈A} cg g′ exists in T•,•.
This is proved in stages.

Proposition 2.6. Let A ⊆ W0 be well ordered, and let T = ∑_{g∈A} cg g have support
A. Then (i) the family { supp(g′) : g ∈ A } is point-finite; (ii) ⋃_{g∈A} supp(g′) is well
ordered; (iii) ∑_{g∈A} cg g′ exists in T0.

Proof. Since (x^b)′ = b x^{b−1}, the family { supp(g′) : g ∈ A } is disjoint. Then

⋃_{g∈A} supp(g′) ⊆ x^{−1} A,

so it is well ordered. (iii) follows from (i) and (ii).


Proposition 2.7. Let A ⊆ WN be well ordered, and let T = ∑_{g∈A} cg g have support
A. Then (i) the family { supp(g′) : g ∈ A } is point-finite; (ii) ⋃_{g∈A} supp(g′) is well
ordered; (iii) ∑_{g∈A} cg g′ exists in TN.
Proof. This will be proved by induction on N. The case N = 0 is Proposition 2.6.
Now let N ≥ 1 and assume the result holds for smaller values. Decompose A as usual:

A = { ab : b ∈ B, a ∈ Ab },   (∗)

where B ⊂ W^pure_N is well ordered, and for each b ∈ B, the set Ab ⊂ WN−1 is well
ordered. Now if g = ab ∈ A, b ∈ W^pure_N, a ∈ WN−1, then g′ = (a′ + ab†)b and
supp(a′ + ab†) ⊂ WN−1.
(i) Let m ∈ W belong to some supp(g′). It could be that m ∈ supp(a′)b, b ∈ B, a ∈
Ab; this happens for only one b and only finitely many a by the induction hypothesis.
Or it could be that m ∈ supp(ab†)b. This happens for only one b and (since both Ab
and supp b† are well ordered) only finitely many a. So, in all, m ∈ supp(g′) for only
finitely many g ∈ A.
(ii) For b ∈ B, let

Cb = (Ab · supp b†) ∪ ⋃_{a∈Ab} supp(a′).

So using the induction hypothesis and [8, Prop. 3.27], we conclude that Cb ⊂ WN−1 is
well ordered. Therefore

⋃_{g∈A} supp(g′) ⊆ { ab : b ∈ B, a ∈ Cb }

is also well ordered since it is ordered lexicographically.
(iii) follows from (i) and (ii).

Proof of Theorem 2.5. Recall the notation lm = log ◦ log ◦ · · · ◦ log with m logarithms
(m > 0), l0 = x, l−m = exp ◦ exp ◦ · · · ◦ exp with m exponentials. Note for m ≥ 1,
l′m = 1/(x l1 l2 · · · lm−1) ∈ Wm−1,m−1.
Define A1 = { g ◦ l−M : g ∈ A }. Then A1 is well ordered and A1 ⊆ WN. Thus the
previous result applies to A1. Now for g ∈ A we have g = g1 ◦ lM, g1 ∈ A1, and
g′ = (g′1 ◦ lM) · l′M. So supp(g′) = (supp(g′1) ◦ lM) · l′M. Both correspondences (compose
with lM and multiply by l′M) are bijective and order-preserving. So the family
{ supp(g′) : g ∈ A } is point-finite since { supp(g′1) : g1 ∈ A1 } is point-finite; ⋃_{g∈A} supp(g′)
is well ordered since ⋃_{g1∈A1} supp(g′1) is well ordered. And supp(g′) ⊂ Wmax(N,M),M,
so T′ ∈ T•,•.

Now we consider a set closed under derivative in a certain sense: a single well
ordered set that supports all derivatives of some T .

Proposition 2.8. Let A ⊂ W satisfy: A is log-free; A is well ordered; m† ≼ 1 for all
m ∈ A. Then there is Ã such that: Ã ⊇ A; Ã is log-free; Ã is well ordered; m† ≼ 1 for
all m ∈ Ã; if m ∈ Ã then supp(m′) ⊆ Ã.
Proof. Let A be log-free and well ordered with m† ≼ 1 for all m ∈ A. Now (e^{x^2})† =
2x ≻ 1, so by “height wins” A ⊂ W1. We may decompose A by factoring each g ∈ A
as g = x^b e^L, so that

A = { x^b e^L : e^L ∈ B, x^b ∈ AL },

where B is well ordered and, for each e^L ∈ B, the set AL ⊆ W0 is well ordered; the
ordering is lexicographic:

x^{b1} e^{L1} ≺ x^{b2} e^{L2} ⇐⇒ L1 < L2 or { L1 = L2 and b1 < b2 }.

Now fix an L with e^L ∈ B. (Of course L = 0 is allowed.) Then L′ ≼ 1, so supp L′
is a well ordered set in W0 with m ≼ 1 for all m ∈ supp L′. The monoid (supp L′)∗
generated by supp L′ is well ordered. So

ÃL := (supp L′)∗ · AL · {1, x^{−1}, x^{−2}, x^{−3}, · · · }

is well ordered. Define

Ã := { x^b e^L : e^L ∈ B, x^b ∈ ÃL }.

Because the ordering is lexicographic, Ã is also well ordered. Note A ⊆ Ã ⊂ W1. If
x^b e^L ∈ Ã, then (x^b e^L)† ≼ x^{−1} + L′ ≼ 1. Let m = x^b e^L ∈ Ã. Then m′ = (b x^{b−1} + x^b L′) e^L.
But x^{b−1} ∈ ÃL and supp(x^b L′) ⊆ ÃL. Therefore supp(m′) ⊆ Ã.

Note: Let Ã ⊆ W with e^{x^2} ∈ Ã and if m ∈ Ã then supp(m′) ⊆ Ã. Such Ã cannot be
well ordered, since it contains x^j e^{x^2} for all j ∈ N. But there are at least the following
two propositions.

Proposition 2.9. Let e ∈ WN \ WN−1, e ≺ 1. Let A ⊂ WN be well ordered such that
m† ≼ 1/(xe) for all m ∈ A. Then there exists well ordered Ã ⊂ WN such that Ã ⊇ A
and if g ∈ Ã, then supp(xeg′) ⊆ Ã.

Proof. Write e = e0 e1 with e0 ∈ WN−1, e1 ∈ W^pure_N, e1 ≺ 1. Now for g = ab ∈ A, we
have

xeg′ = x e0 e1 (a′b + ab′) = (x e0 a′ + x e0 a b†) · (e1 b),   (1)

with e1 b ∈ W^pure_N and support of the first factor in WN−1. Applying this again:

(xe∂)^2 g = ( x e0 (x e0 a′ + x e0 a b†)′ + x e0 (x e0 a′ + x e0 a b†)(e1† + b†) ) e1^2 b.

Continue many times: (xe∂)^j g = V · e1^j b, supp V ⊂ WN−1, every term in V has the
following form: some a ∈ Ab, or some derivative, up to order j, multiplied by factors
chosen from x, e0, b†, e1†, or derivatives of these, up to order j, each to a power at most
j. So there are finitely many well ordered sets involved.
Now let B̃ = B · {1, e1, e1^2, · · · }. Thus B ⊆ B̃ ⊂ W^pure_N and B̃ is well ordered. Fix
m ∈ B̃. Because B is well ordered and e1 ≺ 1, we have m = b e1^j with b ∈ B for only
finitely many different values of j. For each such j we get a well ordered set in WN−1.
Since there are finitely many j, in all we get a well ordered set, call it Ãm. Our final
result is

Ã = { am : m ∈ B̃, a ∈ Ãm },

again with lexicographic order. So Ã is well ordered. From (1) we conclude: if g ∈ Ã,
then supp(xeg′) ⊆ Ã.

Proposition 2.10. Let e ∈ WN \ WN−1, e ≺ 1. Let A ⊂ W• be well ordered such that
m† ≼ 1/(xe) for all m ∈ A. Then there exists well ordered Ã ⊂ W• such that Ã ⊇ A
and if g ∈ Ã, then supp(xeg′) ⊆ Ã.

Proof. Let n be minimum such that A ⊂ Wn. If n = N, then this has been proved
in Proposition 2.9. In fact, if n < N the proof of Proposition 2.9 still works with
B = {1}. We proceed by induction on n. Assume n > N and the result is true for
smaller n. Decompose A as usual:

A = { ab : b ∈ B, a ∈ Ab },

where B ⊂ W^pure_n is well ordered, and for each b ∈ B, the set Ab ⊂ Wn−1 is well
ordered. For g = ab ∈ A,

xeg′ = (xea′ + xeab†)b.   (2)

Now supp(xeb†) is well ordered and ≼ 1, so the monoid (supp(xeb†))∗ generated by
it is well ordered, so Ab · (supp(xeb†))∗ is well ordered. By the induction hypothesis,
there exists well ordered Ãb such that

Ab · (supp(xeb†))∗ ⊆ Ãb ⊂ Wn−1,

and if m ∈ Ãb then supp(xem′) ⊆ Ãb. Then define

Ã = { ab : b ∈ B, a ∈ Ãb },

which is again well ordered. From (2) we conclude: if g ∈ Ã, then supp(xeg′) ⊆ Ã.

3 The Recursive Structure of the Transline
Proposition 3.1 (Inductive Principle). Let R ⊆ T. Assume:
(a) a ∈ R for all constants a ∈ R.
(b) x ∈ R.
(c) If A, B ∈ R, then AB ∈ R.
(d) If Ai ∈ R for all i in some index set, and Ai → 0, then ∑ Ai ∈ R.
(e) If A ∈ R, then e^A ∈ R.
(f) If A ∈ R, then A ◦ log ∈ R.
Then R = T.
Proof. This principle is clear from the definition for T in [8] once we observe:
(i) x ◦ log = log(x), so log(x) ∈ R by (b) and (f). (ii) If b ∈ R, then b log(x) ∈ R by (a)
and (c). (iii) e^{b log(x)} = x^b, so x^b ∈ R by (e). (iv) Once the terms of a purely large L
are known to be in R, we get monomial x^b e^L ∈ R. (v) If T = ∑ cj gj and monomials
gj ∈ R, then T ∈ R. (vi) If T ∈ R, then T ◦ logM ∈ R.
In fact, the set of conditions can be reduced:
Corollary 3.2. Let R ⊆ T = R⟦G⟧, and identify G as a subset of T as usual.
Assume:
(d′) If supp A ⊆ R, then A ∈ R.
(e′) If b ∈ R and L ∈ R is purely large and log-free, then x^b e^L ∈ R.
(f′) If g ∈ R is a monomial, then g ◦ log ∈ R.
Then R = T.
Proof. Since supp 0 = ∅, we get 0 ∈ R by (d′ ); but 0 is purely large and log-free, so
1, x ∈ R by (e′ ). Follow the construction in [8].
Another inductive form (see [13]):
Corollary 3.3. Let R ⊆ T = R⟦G⟧, and identify G as a subset of T as usual.
Assume:
(b′′) For all n ∈ N, ln ∈ R.
(d′′) If supp A ⊆ R, then A ∈ R.
(e′′) If L ∈ R is purely large, then e^L ∈ R.
Then R = T.
Proof. First, log x ∈ R by (b′′). For any b ∈ R, b log x is purely large, so e^{b log x} = x^b ∈ R
by (e′′). Next, T0 ⊆ R and b log x + L ∈ R for any purely large L ∈ T0 by (d′′), so
e^{b log x + L} = x^b e^L ∈ R. Thus G1 ⊆ R so T1 ⊆ R. Continuing inductively, Gn, Tn ⊆ R
for all n ∈ N. So T• ⊆ R.
Note that R̃ := { T ∈ T : T ◦ log ∈ R } also satisfies the three conditions, so by the
preceding paragraph T• ⊆ R̃, and T•1 ⊆ R. Continuing inductively, T•m ⊆ R for all
m ∈ N. So T•• ⊆ R and R = T.
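
To make the recursion concrete, here is a small Python sketch (my own illustration, not from [8] or [13]). The three constructors mirror (b′′), (d′′), (e′′): iterated logarithms ln, finite real combinations of monomials, and exp of an element that is assumed, not checked, to be purely large. Evaluating at a large real argument stands in for the germ at infinity.

    # A minimal sketch (not from the cited papers): a data type whose
    # constructors mirror (b''), (d''), (e'') of Corollary 3.3.
    from dataclasses import dataclass
    from typing import Tuple
    import math

    class Trans:
        def at(self, x: float) -> float: ...

    @dataclass(frozen=True)
    class Ell(Trans):                 # (b''): l_n = log_n x; l_{-n} = exp_n x
        n: int
        def at(self, x):
            f = math.log if self.n > 0 else math.exp
            for _ in range(abs(self.n)):
                x = f(x)
            return x

    @dataclass(frozen=True)
    class Series(Trans):              # (d''): finite real combination of terms
        terms: Tuple[Tuple[float, Trans], ...]
        def at(self, x):
            return sum(c * g.at(x) for c, g in self.terms)

    @dataclass(frozen=True)
    class Exp(Trans):                 # (e''): e^L, with L assumed purely large
        L: Trans
        def at(self, x):
            return math.exp(self.L.at(x))

    # Example: L = x - log x is purely large, and e^L = e^x / x.
    L = Series(((1.0, Ell(0)), (-1.0, Ell(1))))
    print(Exp(L).at(10.0), math.exp(10.0) / 10.0)   # both 2202.6465...

Only finite sums are representable here, of course; condition (d′′) in full requires arbitrary well-based (or grid-based) sums.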
Question 3.4. Is there a good recursive formulation for P or S? See 4.1.

The Schmeling Tree of a Transmonomial
Let g be a transmonomial, g ∈ G. Then g = e^L, where L ∈ T is purely large. So
L = c0 g0 + c1 g1 + · · · where ci ∈ R and gi ∈ Glarge. We may index this as L = ∑_i ci gi,
where i runs over some ordinal (an ordinal < ω^ω for the grid-based case; just countable
for the well-based case; possibly finite; possibly just a single term; or even no terms at
all if g = 1).
In turn, each gi = e^{Li}, where Li ∈ T is purely large and positive. So Li = ∑_j cij gij,
where index j runs over some ordinal (possibly a different ordinal for different i).
Continuing, each gij = e^{Lij}, where Lij ∈ T, and Lij = ∑_k cijk gijk where gijk ∈
G. And so on: each gi1i2···is is in Glarge, and has the form gi1i2···is = e^{Li1i2···is}, and
Li1i2···is = ∑_j ci1i2···is j gi1i2···is j.
Say the original monomial g has height N; that is, in the terminology of [8], g ∈
GN,•. Then eventually (with s ≤ N) we reach gi1i2···is = (lm)^b for some m, and if b ≠ 1,
then in one more step we get gi1i2···is+1 = lm+1. Let us stop a “branch” i1, i2, · · · when
we reach some lm (even if m ≤ 0 so that we have x or expn x).
The structure of the monomial g then corresponds to a Schmeling tree. (We have
adapted this tree description from Schmeling’s thesis [17].) Each node corresponds to
some monomial. The root corresponds to g. The children of g are the gi. A leaf
corresponds to some logm x, and is labeled by the integer m. Each node that is not
a leaf has countably many children, arranged in an ordinal, and each edge is labeled
by a real number. All nodes gi1i2···is in the tree (except possibly the root g) are large
monomials.
Example 3.5. Consider the following example. The ordinals here are all finite, so that
everything can be written down.

g = exp( − exp( 4 exp(2x^4 − x) − (2/3) exp x ) + 3 exp( π exp(x^4 − 2x^2) + log x ) ).

The component parts of the tree:

g0 = e^{4e^{2x^4−x} − (2/3)e^x},   c0 = −1,
g1 = e^{πe^{x^4−2x^2} + log x},   c1 = 3,
g00 = e^{2x^4−x},   c00 = 4,
g01 = e^x = log−1 x,   c01 = −2/3,
g10 = e^{x^4−2x^2},   c10 = π,
g11 = log x = log1 x,   c11 = 1,
g000 = x^4 = e^{4 log x},   c000 = 2,
g001 = x = log0 x,   c001 = −1,
g100 = x^4 = e^{4 log x},   c100 = 1,
g101 = x^2 = e^{2 log x},   c101 = −2,
g0000 = g1000 = g1010 = log x = log1 x,   c0000 = c1000 = 4,   c1010 = 2.

The tree representing g is shown in Figure 1.


Figure 1: The Schmeling tree corresponding to monomial g

There are notions of “height” and “depth” associated with such a tree-representation
of a transmonomial g. Let us say that g has tree-height N iff the longest branch (from
root to leaf) has N edges; and that g has tree-depth M iff M is the largest label on a
leaf. So the example in Figure 1 has tree-height 4 and tree-depth 1. These definitions
are convenient for analysis of such a tree diagram. They may differ from the notions
of “height” and “depth” defined in [8]. If g has height N (that is, g ∈ GN,•), then g
has tree-height at most N + 1. But it may be much smaller; for example,

g = e^{e^{e^x} + x}

has tree-height 1 but height 3. If g has depth M (that is, g ∈ G•,M), then g has
tree-depth M or M + 1, at least if we have allowed negative values of M. The same
example g has depth 0 and tree-depth 0, but

g = e^{e^{e^{x^2}} + x}

has depth 0 and tree-depth 1.


Tree-height and tree-depth behave in the same way as height and depth under
composition on the right by log or exp. That is: if g has tree-height N and tree-depth
M , then g ◦ exp has tree-height N and tree-depth M − 1, and g ◦ log has tree-height
N and tree-depth M + 1. Any g ∈ G0 has tree-depth ≤ −1, so g ◦ exp has tree-depth
≤ 0. If tree-depth is ≤ 0, it is sometimes convenient to extend all branches (using single
edges with coefficient 1) so that all leaves are x.

Schmeling Tree and Derivative


Let g be a transmonomial represented as a Schmeling tree. What are the monomials in
the support of the derivative g′? Since g = e^L, the derivative is e^L L′, so the monomials
in its support have the form g times a monomial in the support of L′. Continuing this
recursively, we see that a monomial in supp g′ looks like

g gi1 gi1i2 · · · gi1i2···is−1 (logm x)′   (1)

where s is chosen so that gi1i2···is = logm x, and of course (logm x)′ is itself a monomial.
(The monomials gi1, · · · , gi1i2···is−1 are large, but if m > 0, then the monomial (logm x)′
is small.) So there is one term of g′ for each branch (from root to leaf) of the tree. In
the derivative g′, the coefficient for monomial (1) is

ci1 ci1i2 · · · ci1i2···is,

the product of all the edge-labels on the corresponding branch.
Example 3.6. Following the tree in the example (Figure 1), we may write the derivative
g′ with one term for each of the six branches of the tree:

g′ = (−1) · 4 · 2 · 4 · g g0 g00 g000 · (log x)′
   + (−1) · 4 · (−1) · g g0 g00 · x′
   + (−1) · (−2/3) · g g0 · (exp x)′
   + 3 · π · 1 · 4 · g g1 g10 g100 · (log x)′
   + 3 · π · (−2) · 2 · g g1 g10 g101 · (log x)′
   + 3 · 1 · g g1 · (log x)′.
The monomial (1) without the first factor g is an element of the set lsupp(g). The
magnitude of g′ is the monomial we get following the left-most branch

g g0 g00 · · · g00···0 (logm x)′,

since all other branches are far smaller.
In the special case where the tree-depth of g is ≤ 0, and we extend all branches so
that all leaves are x, the monomials in g′ are

g gi1 gi1i2 · · · gi1i2···is−1

where s is chosen with gi1i2···is = x. In this case, all monomials gi1 · · · gi1i2···is−1 in
lsupp g are large, and we have

m := max lsupp g = g0 g00 · · · g00···0 = mag(g′/g).

Then g′ ∼ gm, and we get g^(n) ∼ g m^n for all n ∈ N by induction using m^2 ≻ m′ [8,
Prop. 3.82(iv)]. (This may not hold when g has positive tree-depth.)
Proposition 3.7. Let T, V ∈ T. Assume all monomials in T have tree-depth ≤ 0, and
V ≺ 1/m where m = max lsupp T. Then the family

T^(n) V^n,   n ∈ N,

is point-finite, so the series

∑_{n=0}^{∞} T^(n)(x) V^n/n!

converges in the asymptotic topology.
Proof. Fix finite set µ ⊂ Gsmall so that all far-smaller inequalities are witnessed by
µ: in particular, V ≺µ 1/m and T = dom(T) · (1 + S) with S ≺µ 1. Note that
T^(n+1) ∼ m T^(n). Then

T ≻µ T′V ≻µ T′′V^2 ≻µ · · ·,

so by [8, Prop. 4.17] the series ∑ T^(n)(x)V^n/n! is point-finite.
Remark 3.8. The same result should be true for other T , perhaps using tsupp not
lsupp; see [9, Def. 7.1].

4 Properties of Composition
Composition T ◦ S is defined when T, S ∈ T and S is large and positive. As usual we
will write T = T (x) and T ◦ S = T (S).
Notation 4.1. Write P for the group of large positive transseries. And S for the subgroup
S = x + o(x) = { T ∈ T : dom T = x } = { T ∈ T : T ∼ x }. For now, think of P and
S as sets. They are closed under composition. For existence of inverses: well-based,
Proposition 4.20; grid-based, [9, Sec. 8].
Many basic properties of composition may be proved by applying an inductive prin-
ciple such as Proposition 3.1 to the left composand T . (I may—perhaps misleadingly—
call this “induction on the height”.) Here are some examples.
Proposition 4.2. Let T, T1 , T2 ∈ T, S ∈ P. Then

T > 0 =⇒ T ◦ S > 0,
T = 0 =⇒ T ◦ S = 0,
T < 0 =⇒ T ◦ S < 0,
T1 < T2 =⇒ T1 ◦ S < T2 ◦ S,
T1 = T2 =⇒ T1 ◦ S = T2 ◦ S,
T1 > T2 =⇒ T1 ◦ S > T2 ◦ S,
T ≺ 1 =⇒ T ◦ S ≺ 1,
T ≻ 1 =⇒ T ◦ S ≻ 1,
T ≍ 1 =⇒ T ◦ S ≍ 1,
T ∼ 1 =⇒ T ◦ S ∼ 1,
T1 ≺ T2 =⇒ T1 ◦ S ≺ T2 ◦ S,
T1 ≻ T2 =⇒ T1 ◦ S ≻ T2 ◦ S,
T1 ≍ T2 =⇒ T1 ◦ S ≍ T2 ◦ S,
T1 ∼ T2 =⇒ T1 ◦ S ∼ T2 ◦ S,
T ◦ S ≍ mag(T ◦ S) = mag((mag T ) ◦ S) ≍ (mag T ) ◦ S,
T ◦ S ∼ dom(T ◦ S) = dom((dom T ) ◦ S) ∼ (dom T ) ◦ S.

Some corresponding things may fail for the other composand: Let T = e^x, S1 =
x + log x, and S2 = x. Then S1 ≍ S2 but T ◦ S1 ≻ T ◦ S2, and dom(T ◦ S1) is not ≍ T ◦ dom S1.
Proposition 4.3. Let S1, S2 ∈ P, S1 < S2.
(a) if c ∈ R, c > 0, then S1^c < S2^c,
(b) if c ∈ R, c < 0, then S1^c > S2^c,
(c) log(S1) < log(S2),
(d) exp(S1) < exp(S2).
Proof. (a) Write the canonical multiplicative decomposition S1 = a1 e^{L1}(1 + U1) as in
1.3, and similarly S2 = a2 e^{L2}(1 + U2). Then

S1^c = a1^c e^{cL1} (1 + cU1 + ∑_{j=2}^{∞} cj U1^j),   S2^c = a2^c e^{cL2} (1 + cU2 + ∑_{j=2}^{∞} cj U2^j),   (1)

for certain (binomial) coefficients cj. Now for S1 < S2 there are these cases: (i) L1 <
L2; (ii) L1 = L2, a1 < a2; (iii) L1 = L2, a1 = a2, U1 < U2. But in each of these cases,
applying equations (1) shows S1^c < S2^c. For case (iii):

S2^c − S1^c = a1^c e^{cL1} (U2 − U1) ( c + ∑_{j=2}^{∞} cj (U2^{j−1} + U2^{j−2}U1 + · · · + U1^{j−1}) ) > 0

since the terms in the ∑ are all ≺ 1.
(b) is similar.
(c) Write canonical multiplicative decomposition S1 = a1 e^{L1}(1 + U1) as in 1.3 and
similarly S2 = a2 e^{L2}(1 + U2). Then

log(S1) = log(a1) + L1 + U1 + ∑_{j=2}^{∞} cj U1^j,
log(S2) = log(a2) + L2 + U2 + ∑_{j=2}^{∞} cj U2^j,

for certain coefficients cj. The same cases (i)–(iii) may be used, and in each case we
get log(S1) < log(S2). Case (iii) has reasoning as we did before for (a).
(d) For this, write the canonical additive decomposition S1 = L1 + c1 + U1 as in
1.2, and similarly S2 = L2 + c2 + U2. Then

e^{S1} = e^{c1} e^{L1} (1 + U1 + ∑_{j=2}^{∞} cj U1^j),   e^{S2} = e^{c2} e^{L2} (1 + U2 + ∑_{j=2}^{∞} cj U2^j),

for certain coefficients cj. For S1 < S2 there are three cases: (i) L1 < L2; (ii) L1 =
L2, c1 < c2; (iii) L1 = L2, c1 = c2, U1 < U2. In all three cases we get e^{S1} < e^{S2}.

Proposition 4.4. (a) If T ∈ T, T > 0, T ≠ 1, then log T < T − 1. (b) If T ∈ T,
T ≠ 0, then exp T > T + 1.

Proof. First note: If L is purely large and positive, then e^L ≻ L. First use [8,
Prop. 3.72] for log-free L. Then Proposition 4.2 to compose with logM on the inside.
It follows that: If T ≻ 1 and T > 0, then e^T ≻ T.
(a) Write A = log(T) − T + 1; I must show A < 0. Write canonical multiplicative
decomposition T = ae^L(1 + U) as in 1.3. Then log(T) = log(a) + L − ∑_{j=1}^{∞} (−1)^j U^j/j.
Now if L > 0, then T ≻ 1, T ≻ L ≻ 1, so A ∼ −T < 0. If L < 0, then T ≺ 1 ≺ |L|, so
A ∼ L < 0. So assume L = 0. Now if a ≠ 1, then A ∼ log(a) − a + 1, which is < 0 by
the ordinary real Taylor theorem. So assume a = 1. Then if U ≠ 0 we have

A = − ∑_{j=1}^{∞} (−1)^j U^j/j − (1 + U) + 1 = − U^2/2 + o(U^2) < 0.

So the only case left is U = 0, and that means T = 1.


(b) Write A = exp T − T − 1; I must show A > 0. Write canonical additive
decomposition T = L + c + V as in 1.2. So exp T = e^L e^c (1 + V + . . . ). If L > 0, then
T ≻ 1, e^T ≻ T ≻ 1, so A ∼ e^T > 0. If L < 0, then e^T ≺ 1, T ∼ L, |L| ≻ 1, so A ∼ −L > 0.
So assume L = 0. If c ≠ 0, then A ∼ e^c − c − 1, which is > 0 by the ordinary real
Taylor theorem. So assume c = 0. Then if V ≠ 0 we have

A = ∑_{j=0}^{∞} V^j/j! − V − 1 = V^2/2 + o(V^2) > 0.

So the only case left is V = 0, and that means T = 0.

Exponentiality
Associated to each large positive transseries is an integer known as its “exponentiality”
[13, Exercise 4.10]. If you compose with log sufficiently many times on the left, the
magnitude is a leaf lm . The number p in the following result is the exponentiality of
Q, written p = expo Q.

Proposition 4.5. Let Q ∈ P. Then there is p ∈ Z and N ∈ N so that for all n ≥ N ,


logn ◦ Q ◦ expn ∼ expp . Equivalently, logn Q ∼ ln−p .

Proof. We will use the basic definition for logarithms. Let A = ce^L(1+U) be the canonical
multiplicative decomposition. If A ∈ P, this means c > 0 and L is purely large and
positive. Then log A = L + log c + ∑_{j=1}^{∞} ((−1)^{j+1}/j)U^j. From this we get: If A, B ∈ P,
A ≍ B, then log A ∼ log B. Write R[p, N] := { Q ∈ P : logn Q ∼ ln−p for all n ≥ N }.
(i) lm ∈ R[−m, 0].
(ii) Let A = ce^L(1 + U) ∈ P; then dom(log A) = dom L, where also dom L ∈ P and
(unless L has height 0) the height of dom L is less than the height of dom A = ce^L. If
dom L ∈ R[p, N] then A ∈ R[p + 1, N + 1].
(iii) Let A have height 0, so A ∼ c lm^b, c, b ∈ R, c > 0, b > 0. Then log A ∼ b lm+1
and log2 A ∼ lm+2, so A ∈ R[−m, 2].
These rules cover all P.

Remark 4.6. Alternate terminology: exponentiality = level. So Proposition 4.5 says
that the exponential ordered field R⟦x⟧ is levelled.
Example 4.7. If

T ∼ 4(log x)^2 x^π e^{5x^2 − x}

(so that the dominant term of T is 4(log x)^2 x^π e^{5x^2−x}), then

log ◦ T ◦ exp ∼ 5e^{2x} − e^x + πx + 2 log x + log 4 ∼ 5e^{2x},
log2 ◦ T ◦ exp2 ∼ 2e^x + log 5 ∼ 2e^x,
log3 ◦ T ◦ exp3 ∼ e^x + log 2 ∼ e^x,
logk ◦ T ◦ expk ∼ e^x, for all k ≥ 3,

so expo T = 1.
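
The first of these conjugations can be checked mechanically. Here is a SymPy sketch (my own check, assuming SymPy's expand_log and limit behave as documented) for the single step log ◦ T ◦ exp applied to the dominant term.

    # A sketch, not from the paper (assumes SymPy): one conjugation step
    # log o T o exp for the dominant term in Example 4.7.
    import sympy as sp

    x = sp.symbols('x', positive=True)
    T = 4*sp.log(x)**2 * x**sp.pi * sp.exp(5*x**2 - x)

    step = sp.expand_log(sp.log(T.subs(x, sp.exp(x))), force=True)
    print(step)   # 5*exp(2*x) - exp(x) + pi*x + 2*log(x) + log(4), in some order
    print(sp.limit(step / (5*sp.exp(2*x)), x, sp.oo))   # 1, so step ~ 5*e^(2x)

Two more steps, each time keeping only the dominant term as in the display above, give 2e^x and then e^x, so expo T = 1.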

Proposition 4.8. If expo T = 0, then logk ◦ T ◦ expk is log-free for k large enough.

Proof. Prove recursively: Assume T = x + A, A ∈ R⟦G•,M⟧, M > 0, A ≺ x. Then
T ◦ exp = e^x + A ◦ exp = e^x(1 + B) with B = (A/x) ◦ exp ∈ R⟦G•,M−1⟧ and
log ◦ T ◦ exp = x + ∑_{j=1}^{∞} (−1)^{j+1} B^j/j has depth M − 1.

Simpler Proof Needed
Here is a simple fact. It needs a simple proof. It is true for functions, so it is surely
true for transseries as well. My overly-involved proof will be given in Section 8. In
fact, there are two propositions. Each can be deduced from the other:

Proposition 4.9. Let T ∈ T, S1, S2 ∈ P, S1 < S2. Then

T′ > 0 =⇒ T ◦ S1 < T ◦ S2,
T′ = 0 =⇒ T ◦ S1 = T ◦ S2,   (1)
T′ < 0 =⇒ T ◦ S1 > T ◦ S2.

Proposition 4.10. Let A, B ∈ T, S1, S2 ∈ P, A′ ≺ B′, S1 < S2. Then

A ◦ S2 − A ◦ S1 ≺ B ◦ S2 − B ◦ S1.   (2)

Proof of 4.10 from 4.9. Since the theorem is unchanged when we replace B by −B,
we may assume B′ > 0. We have A′ ≺ B′. Let c ∈ R. By Remark 1.6, B′ > cA′ so
(B − cA)′ > 0. Therefore, by Proposition 4.9, (B − cA) ◦ S1 < (B − cA) ◦ S2, so

B ◦ S2 − B ◦ S1 > c (A ◦ S2 − A ◦ S1).

This is true for all c ∈ R, so we have B ◦ S2 − B ◦ S1 ≻ A ◦ S2 − A ◦ S1.

Proof of 4.9 from 4.10. Let R be the set of all T ∈ T that satisfy (1) for all S1, S2 ∈ P
with S1 < S2. We claim R satisfies the conditions of Corollary 3.3. Clearly 1, x ∈ R.
(b′′) Note l′m = 1/∏_{j=0}^{m−1} lj > 0. If S1 < S2, then by Proposition 4.3(c) we have
logm S1 < logm S2.
(d′′) Assume supp T ⊆ R. If T = 0, the conclusion is clear. Assume T ≠ 0. Let
ag = dom T, a ∈ R, g ∈ G. We may assume g ≠ 1, since if g = 1, we may consider
T − ag instead. So T′ ∼ ag′. Write A = T − ag so that T = ag + A with A ≺ ag.
There will be cases based on the signs of a and g′. Take the case a > 0, g′ > 0. So
g ◦ S1 < g ◦ S2 since g ∈ R. Now by Proposition 4.10,

ag ◦ S2 − ag ◦ S1 ≻ A ◦ S2 − A ◦ S1,

so T ◦ S2 − T ◦ S1 ∼ ag ◦ S2 − ag ◦ S1 > 0 and therefore T ◦ S2 − T ◦ S1 > 0. The other
three cases are similar.
(e′′) Let T = e^L, where L ∈ R is purely large. Then T′ = L′e^L, so T′ has the same
sign as L′. Thus L ◦ S1 < L ◦ S2 if T′ > 0 and reversed if T′ < 0. Apply Proposition
4.3(d) to get e^{L◦S1} < e^{L◦S2} or reversed, as required.

Remark 4.11. To prove either 4.10 or 4.9 outright seems to require more work than
the proofs found above. See Theorem 8.14.
Here is a special case of Proposition 4.10.

Proposition 4.12. If A ∈ T, S1 , S2 ∈ P, S1 < S2 , and A ≺ x, then


A ◦ S2 − A ◦ S1 ≺ S2 − S1 .

Proof. Note A′ ≺ x′ and apply Proposition 4.10.

Grid-Based Version
As we know, T ≺ S if and only if T ≺µ S for some finite set µ ⊂ Gsmall of generators.
So of course Proposition 4.12 needs a form in terms of ratio sets. It is found in [9,
Rem. 9.3]:
Proposition 4.13. Let µ be a ratio set. Let S1 , S2 ∈ P. Then there is a ratio set α
such that: For every A ∈ Tµ, if A ≺µ x, then A(S2 ) − A(S1 ) ≺α S2 − S1 .
Note that α depends on S1 and S2 , not just on a ratio set generating them. It is
apparently not possible to avoid this problem:
Question 4.14. Given a ratio set µ ⊂ Gsmall , is there α ⊇ µ such that: if A, S1 , S2 ∈ Tµ,
A ≺µ x, S1 , S2 ∈ P, and S1 < S2 , then A ◦ S2 − A ◦ S1 ≺α S2 − S1 ?
Example 4.15. Let µ = {x^{−1}, e^{−x^3}}. Consider A = µ2 = e^{−x^3} and Sa = µ1^{−1} + aµ1 =
x + ax^{−1} for a ∈ R. Certainly A ≺µ 1. Compute

A ◦ Sa = e^{−(x+ax^{−1})^3} = e^{−x^3 − 3ax − 3a^2 x^{−1} − a^3 x^{−3}}
       = e^{−x^3 − 3ax} ∑_{j=0}^{∞} (−3a^2 x^{−1} − a^3 x^{−3})^j / j!.

The dominant term is the monomial e^{−x^3−3ax}. As a ranges over R, these monomials
do not lie in any grid. Nor even in any well ordered set.
Now if a < b, then Sa < Sb and e^{−x^3−3ax} ≻ e^{−x^3−3bx}, so

Sb − Sa = (b − a)x^{−1},   A ◦ Sb − A ◦ Sa ∼ −e^{−x^3−3ax}.

Of course A ◦ Sb − A ◦ Sa ≺ Sb − Sa. But there is no finite α such that
A ◦ Sb − A ◦ Sa ≺α Sb − Sa for all a, b ranging over the reals.
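
The exponent arithmetic here is a one-line check (my own, assuming SymPy):

    import sympy as sp
    x, a = sp.symbols('x a', positive=True)
    print(sp.expand(-(x + a/x)**3))
    # -a**3/x**3 - 3*a**2/x - 3*a*x - x**3

which is the exponent of A ◦ Sa used above.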

Integral Notation

Notation 4.16. If A, B ∈ T and A′ = B, we may sometimes write A = ∫B, but in
fact A is only determined by B up to a constant summand. The large part of A is
determined by B. We also write ∫_{S1}^{S2} B := A(S2) − A(S1), which is uniquely determined
by B, and is defined for S1, S2 ∈ P, S1 < S2.
Of course, with this definition, any statement about integrals is equivalent to a
statement about derivatives. Propositions 4.9 or 4.10 lead to the following.
Corollary 4.17. Let A, B ∈ T, S1, S2 ∈ P, S1 < S2. Then

B > 0 =⇒ ∫_{S1}^{S2} B > 0,
B = 0 =⇒ ∫_{S1}^{S2} B = 0,
B < 0 =⇒ ∫_{S1}^{S2} B < 0,
A > B =⇒ ∫_{S1}^{S2} A > ∫_{S1}^{S2} B,
A = B =⇒ ∫_{S1}^{S2} A = ∫_{S1}^{S2} B,
A < B =⇒ ∫_{S1}^{S2} A < ∫_{S1}^{S2} B.

Remark 1.6 lets us prove formulas about ≺ from formulas about <. Here are some
examples.
Proposition 4.18. If A, B ∈ T, A, B nonzero, S1, S2 ∈ P, S1 < S2, then

A ≻ B =⇒ ∫_{S1}^{S2} A ≻ ∫_{S1}^{S2} B,
A ≺ B =⇒ ∫_{S1}^{S2} A ≺ ∫_{S1}^{S2} B,
A ≍ B =⇒ ∫_{S1}^{S2} A ≍ ∫_{S1}^{S2} B,
A ∼ B =⇒ ∫_{S1}^{S2} A ∼ ∫_{S1}^{S2} B.

Compositional Inverse
Now using Proposition 4.12 we get a nice proof for the existence of inverses under
composition. (For the well-based case.) See also [7, Cor. 6.25].
Proposition 4.19. Let T = x + A, A ≺ x, supp A ⊂ GN. Then T has an inverse S
under composition, S = x + B, B ≺ x, supp B ⊂ GN.
Proof. Let the function Φ be defined by Φ(S) = x − A ◦ S. Then Φ maps A :=
{ x + B : B ≺ x, supp B ⊆ GN } into itself [8, Prop. 3.98]. I claim Φ is contracting on
A. Indeed, if S1, S2 ∈ A and S1 ≠ S2, then

Φ(S2) − Φ(S1) = A ◦ S1 − A ◦ S2 ≺ S2 − S1

by Proposition 4.12.
Apply the fixed-point theorem [12, Thm. 4.7] (see Proposition 6.4, below) to get S
with S = Φ(S). Then

T ◦ S = S + A ◦ S = Φ(S) + A ◦ S = x.

As is well known: if right inverses all exist, then they are full inverses. Review of
the proof: Suppose T ◦ S = x as found. Start with S and get a right-inverse T1 so
S ◦ T1 = x. Then T = T ◦ x = T ◦ (S ◦ T1) = (T ◦ S) ◦ T1 = x ◦ T1 = T1.
Proposition 4.20. The set P is a group under composition.
Proof. Let T ∈ P. Let p = expo T, so that logk ◦ T ◦ expk ∼ expp for large enough k.
Let T1 = logk ◦ T ◦ expk−p, so that T1 ∼ x and (if k is large enough) T1 is log-free. By
Proposition 4.19 there is an inverse, say T1 ◦ S1 = x. Write S = expk−p ◦ S1 ◦ logk. Then
T ◦ S = expk ◦ T1 ◦ logk−p ◦ expk−p ◦ S1 ◦ logk = x.
Remark 4.21. We need a grid-based version of Proposition 4.12 to prove existence of a
grid-based compositional inverse using a grid-based fixed-point theorem. This is done
in [9, Sec. 8].

An Example Inverse

Consider the transseries S = log x + 1 + x^{−1} ∈ P. We want to discuss its compositional
inverse. According to the method above, we should compute the inverse of S1 =
S ◦ exp = x + 1 + e^{−x} ∈ P. And if T1 = S1^{[−1]}, then S^{[−1]} = exp ◦ T1.
For the inverse of S1 = x + 1 + e^{−x}, write A = 1 + e^{−x} and solve by iteration
Y = Φ(Y), where Φ(Y) = x − A ◦ Y = x − 1 − e^{−Y}. We end up with

T1 = x − 1 − e e^{−x} − e^2 e^{−2x} − (3e^3/2) e^{−3x} − (8e^4/3) e^{−4x} − · · ·
   = x − 1 − ∑_{j=1}^{∞} aj e^{−jx}

either by iteration, or with a linear equation for each aj in terms of the previous ones.
(And aj is rational times e^j.) And then

S^{[−1]} = e^{T1} = (1/e) e^x − 1 − (e/2) e^{−x} − (2e^2/3) e^{−2x} − (9e^3/8) e^{−3x} − (32e^4/15) e^{−4x} − · · ·
        = (1/e) e^x − 1 − ∑_{j=1}^{∞} bj e^{−jx}.
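
The iteration is pleasant to carry out with truncated series in t = e^{−x}. Writing T1 = x − 1 − A with A = ∑_{j≥1} aj t^j, the fixed-point equation Y = Φ(Y) becomes A = e · t · e^A. Here is a SymPy sketch (my own, assuming SymPy) recovering a1, . . . , a4; each pass of the loop fixes at least one more coefficient.

    # A sketch, not from the paper (assumes SymPy): iterate A = e*t*exp(A),
    # where t stands for e^{-x} and T1 = x - 1 - A.
    import sympy as sp

    t = sp.symbols('t')
    order = 5
    A = sp.Integer(0)
    for _ in range(order):
        A = sp.series(sp.E * t * sp.exp(A), t, 0, order).removeO()
    print(sp.expand(A))
    # E*t + E**2*t**2 + 3*E**3*t**3/2 + 8*E**4*t**4/3  -- the a_1, ..., a_4 above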

Compositional Equations
Because of the group property Proposition 4.20 (or the grid-based version [9, Sec. 8]),
we know: Let S, T ∈ T. If S, T are both large and positive, then there is a unique
Y ∈ P with S = T ◦ Y .

Proposition 4.22. Let S, T ∈ T. Then there is a unique Y ∈ P with S = T ◦ Y in


each of the following cases: S and T are both:
(a) large and positive
(b) small and positive
(c) large and negative
(d) small and negative
(e) For some c ∈ R, c 6= 0, S ∼ c, T ∼ c, S > c, T > c.
(f) For some c ∈ R, c 6= 0, S ∼ c, T ∼ c, S < c, T < c.
There is a nonunique Y ∈ P with S = T ◦ Y in case: for some c ∈ R, both S = c and
T = c. In all other cases, there is no Y with S = T ◦ Y .

Proof. (a) is from Proposition 4.20. (b) Apply (a) to 1/S and 1/T . (c) Apply (a) to
−S and −T . (d) Apply (b) to −S and −T . (e) Apply (b) to S − c and T − c. (f) Apply
(d) to S − c and T − c.
The concluding cases are clear.

Mean Value Theorem
Using Proposition 4.9, we get a MVT.
Proposition 4.23. Given A ∈ T, S1, S2 ∈ P, S1 < S2, there is S ∈ P so that

(A ◦ S2 − A ◦ S1)/(S2 − S1) = A′ ◦ S.
Proof. Write B = (A ◦ S2 − A ◦ S1 )/(S2 − S1 ). We claim that Proposition 4.22 shows
that there is a solution S to B = A′ ◦ S. So we have to show that A′ , B are in the same
case of Proposition 4.22.
Let c ∈ R. If A′ > c, then (A − cx)′ > 0, and therefore by Proposition 4.9
(A−cx)◦S1 < (A−cx)◦S2 , so A◦S2 −A◦S1 > c(S2 −S1 ), so (A◦S2 −A◦S1 )/(S2 −S1 ) > c,
so B > c. Similarly: if A′ < c, then B < c. These hold for all real c, so in fact A′ and
B are in the same case.

The following proposition, too, has—so far—only an involved proof, which will not
be given here. See Section 5 for this and still more versions of the Mean Value Theorem.

Proposition 4.24. Let A ∈ T, S1, S2 ∈ P. If A′′ > 0 and S1 < S2, then

A′ ◦ S1 < (A ◦ S2 − A ◦ S1)/(S2 − S1) < A′ ◦ S2.

Using this, we can improve the Mean Value Theorem 4.23:
Using this, we can improve the Mean Value Theorem 4.23:
Proposition 4.25. Given A ∈ T, S1, S2 ∈ P, S1 < S2, there is S ∈ P, S1 < S < S2, so
that

(A ◦ S2 − A ◦ S1)/(S2 − S1) = A′ ◦ S.
Proof. First assume A′′ > 0. Let S be as in Proposition 4.23. By Proposition 4.24,
A′ (S1 ) < A′ (S) < A′ (S2 ). So by Proposition 4.9 we conclude S1 < S < S2 .
The case A′′ < 0 is similar. The case A′′ = 0 is easy.

Intermediate Value Theorem


Proposition 4.26. Let K, T ∈ T, A, B ∈ P. Assume T (A) ≤ K ≤ T (B). Then there
is S ∈ P with T (S) = K and either A ≤ S ≤ B or A ≥ S ≥ B.
Proof. If T (A) = K, choose S = A; if T (B) = K, choose S = B. So we may assume
T (A) < K < T (B). We will consider cases for T .
(a) First assume T is large and positive. Then the inverse T [−1] exists in P. Also
T (A), T (B) are large and positive, so K, which is between them, is large and positive.
Define S = T [−1] (K). Of course T (S) = K. Since T [−1] is large and positive it
is increasing (by Proposition 4.9), so applying T [−1] to T (A) < K < T (B) we get
A < S < B.
(b) Assume T is large and negative. Apply case (a) to −T .
(c) Assume T is small and positive. Apply case (a) to 1/T .
(d) Assume T is small and negative. Apply case (c) to −T .
(e) Assume there is a ∈ R with T ∼ a, T > a. Apply case (c) to T − a.
(f) Assume there is a ∈ R with T ∼ a, T < a. Apply case (d) to T − a.

(g) The only case left is T = a for some a ∈ R, so T (A) = T (B) = a = K, and this
case was taken care of at the beginning of the proof. Or let S = (A + B)/2 to get S
strictly between A and B when A 6= B.

Remark 4.27. Using 4.26 we can deduce 4.25 from 4.24 without the need of 4.23. But
4.24 is still the difficult step.

5 Taylor’s Theorem
Here we will formulate many versions of Taylor’s Theorem. Unfortunately, proofs are
(as far as I know) still quite involved. Proofs (for most cases) will not be included here.
See [7, §6] for well-based transseries and [13, §5.3] for grid-based transseries. But in
some cases it may not be clear that they have proved everything listed here.
Recall definitions GN, GN,M, G•, etc. If A is a set of monomials, and S ∈ P, write
A ◦ S := { g ◦ S : g ∈ A }. Let U ∈ T; then we say U ≺ A if U ≺ g for all g ∈ A. Recall
that if g ∈ GN,M \ GN−1,M and g ≺ 1, then g ≺ GN−1,M.
Let T ∈ T, S1, S2 ∈ P. For n ∈ N define

∆n(T, S1, S2) := T(S2) − ∑_{k=0}^{n−1} (T^(k)(S1)/k!) (S2 − S1)^k.

When S1, S2 are understood, write ∆n(T). The first few cases:

∆0(T) = T(S2),
∆1(T) = T(S2) − T(S1),
∆2(T) = T(S2) − T(S1) − T′(S1) · (S2 − S1),
∆3(T) = T(S2) − T(S1) − T′(S1) · (S2 − S1) − (1/2) T′′(S1) · (S2 − S1)^2.

Note that derivatives ∂^k are strongly additive, and therefore these ∆n are also.
That is: if S = ∑_{i∈I} Ai (in the asymptotic topology), then ∆n(S) = ∑ ∆n(Ai).
Notation 5.1. Formulations.
[An] Let T ∈ TN,M, T ∉ R, S1, S2 ∈ P. If N = 0 assume S2 − S1 ≺ S1. If N > 0
assume S2 − S1 ≺ GN−1,M ◦ S1. Let n ∈ N. If T^(n) ≠ 0, then

∆n(T) ∼ (T^(n)(S1)/n!) (S2 − S1)^n.

[A∞] Let T ∈ TN,M, T ∉ R, S1, S2 ∈ P. If N = 0 assume S2 − S1 ≺ S1. If N > 0
assume S2 − S1 ≺ GN−1,M ◦ S1. Then

T(S2) = ∑_{j=0}^{∞} (T^(j)(S1)/j!) (S2 − S1)^j.

[Bn] Let T ∈ T, let S1, S2 ∈ P, and let n ∈ N. If T^(n+1) > 0 and S1 < S2, then

(T^(n)(S1)/n!) (S2 − S1)^n < ∆n(T) < (T^(n)(S2)/n!) (S2 − S1)^n.

Other cases also: If T^(n+1) < 0, reverse the inequalities. If S1 > S2 and n is even,
reverse the inequalities.
[Cn] Let T ∈ T, let S1, S2 ∈ P, and let n ∈ N. If T^(n) > 0 and S1 < S2, then
∆n(T) > 0. Other cases also: If T^(n) < 0, reverse the inequality. If S1 > S2 and
n is odd, reverse the inequality.
[Dn] Let A, B ∈ T, let S1, S2 ∈ P, and let n ∈ N. If A^(n) ≺ B^(n) then ∆n(A) ≺ ∆n(B).
Some beginning cases.
[A0] If (S2 − S1) is appropriately small, then T(S2) ∼ T(S1).
[A1] If (S2 − S1) is appropriately small, then T(S2) − T(S1) ∼ T′(S1) · (S2 − S1). Proved
in 7.1.
[B0] If T′ > 0 and S1 < S2, then T(S1) < T(S2) < T(S2). (Second inequality is too
strong.) This is 4.9, proved in 8.14.
[B1] If T′′ > 0 and S1 ≠ S2, then

T′(S1) < (T(S2) − T(S1))/(S2 − S1) < T′(S2).

This is 4.24.
[C0] If T > 0, then T(S2) > 0. This is in 4.2.
[C1] If T′ > 0 and S1 < S2, then T(S2) − T(S1) > 0. This is 4.9 again.
[D0] If A ≺ B then A(S2) ≺ B(S2). This is in 4.2.
[D1] If A′ ≺ B′ then A(S2) − A(S1) ≺ B(S2) − B(S1). This is 4.10, proof in 8.14.
A variant form of [Bn] follows using the intermediate value theorem (a consequence
of [B1]).
[B′n] Let T ∈ T, let S1, S2 ∈ P, and let n ∈ N. If S1 ≠ S2, then there exists S̃ strictly
between S1 and S2 such that

∆n(T, S1, S2) = (T^(n)(S̃)/n!) (S2 − S1)^n.
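
None of these formulations can be proved numerically, but a spot-check with ordinary real functions standing in for transseries is still a useful sanity test. The Python sketch below (my own, using mpmath) implements ∆n and tests [Bn] for T = exp at one sample point. It reports False at n = 0, because ∆0(T) = T(S2) equals the upper bound: exactly the “second inequality is too strong” caveat in [B0].

    # A numeric spot-check, not a proof (assumes mpmath): Delta_n and the
    # [Bn] inequalities for T = exp, with S1 = x^2 < S2 = x^2 + x at x = 3.
    from mpmath import mp, mpf, exp, diff, factorial

    mp.dps = 30
    T = exp
    x = mpf(3)
    S1, S2 = x**2, x**2 + x

    def delta(n):
        return T(S2) - sum(diff(T, S1, k) * (S2 - S1)**k / factorial(k)
                           for k in range(n))

    for n in range(4):
        lo = diff(T, S1, n) * (S2 - S1)**n / factorial(n)
        hi = diff(T, S2, n) * (S2 - S1)**n / factorial(n)
        print(n, lo < delta(n) < hi)
    # 0 False  (the [B0] caveat), then 1 True, 2 True, 3 True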

Good Proofs Needed—But What Methods?


A good exposition is needed for the proofs of the principles stated in 5.1. First steps
are seen below (Section 7 for [A1 ] and Section 8 for [C1 ] and [D1 ]). Now proofs for
[An ] and [A∞ ] should be possible along the same lines. But I think further proofs for
[Bn ], [Cn ], [Dn ] along those lines will be ugly or impossible. So a better approach is
needed. Even if proofs can, indeed, be found in the literature (such as [7, §6] and [13,
§5.3]), they are not as elementary as one might hope.
Related results could be expected from the same methods, perhaps. For example,
does the following follow from the principles listed above, or would it require additional
proof?

Let U, V ∈ T, S1, S2 ∈ P. If U′ > 0, V > 0, S1 < S2, then

U(S1) ∫_{S1}^{S2} V < ∫_{S1}^{S2} U V < U(S2) ∫_{S1}^{S2} V.

Or: There exists S̃ between S1 and S2 with

∫_{S1}^{S2} U V = U(S̃) ∫_{S1}^{S2} V.

Equivalently: Let A, B ∈ T, S1, S2 ∈ P with B′ ≠ 0 and S1 ≠ S2. Then there exists S̃
between S1 and S2 with

(A(S2) − A(S1))/(B(S2) − B(S1)) = A′(S̃)/B′(S̃).

[Equivalence comes from writing B′ = V, A′ = UV.]
One method used for proofs such as these (in conventional calculus) suggests that
we need to know about transseries of two variables in order to use the same proofs
in this setting. This remains to be properly defined and investigated.

6 Topology and Convergence


In [8, Def. 3.45] we defined only the “asymptotic topology” for T. But there are other
topologies or types of convergence. And none of them has all of the desirable properties.
The attractive topology is described by van der Hoeven [12]; I will use letter H
for it, Tγ →H T. For our situation (with totally ordered valuation group G) it is also
the order topology for T and the topology arising from the valuation mag.
Definition 6.1. Let Tγ be a net in T and let T ∈ T. Then Tγ →H T iff for every m ∈ G
there is γm such that for all γ ≥ γm we have T − Tγ ≺ m.
This is the convergence of a metric. Because every transseries has finite height,
there is a countable base for the H-neighborhoods of zero made up of the sets
o(1/expm) = { T ∈ T : T ≺ 1/expm } for m = 0, 1, 2, · · ·.
Here, as usual, exp0 = x, exp1 = e^x, exp2 = e^{e^x}, and so on.
Continuity: (The “ε–δ” type definition.) A function Ψ : T → T is H-continuous at
S0 ∈ T iff: for every m ∈ G there is n ∈ G so that for all S ∈ T, if S − S0 ≺ n then
Ψ(S) − Ψ(S0) ≺ m. We may write it like this: Ψ(S0 + o(n)) ⊆ Ψ(S0) + o(m).
The asymptotic topology I get from Costin [3]; I will use letter C for it, Tj →C T.
Recall the definition:

Definition 6.2. Tj →µ,m T iff supp(Tj) ⊆ Jµ,m for all j and supp(Tj − T) is point-finite;
Tj →µ T iff there exists m with Tj →µ,m T;
Tj →C T iff there exists µ with Tj →µ T.
Sets Tµ,m = { T ∈ T : supp T ⊆ Jµ,m } are metrizable for →C. The asymptotic
topology for all of R⟦G⟧ = T is an inductive limit: open sets are easily described,
convergence (except for sequences) is not. A set U ⊆ T is C-open iff U ∩ Tµ,m is open
in Tµ,m (according to →C) for all µ and m.

Definition 6.3. Here is a similar convergence, applying to well-based transseries, but
which makes sense even for grid-based transseries.
Let A ⊆ G be well ordered. Tj →A T iff supp(Tj) ⊆ A for all j and supp(Tj − T)
is point-finite;
Tj →W T iff there exists well ordered A ⊆ G with Tj →A T.
Sets TA := { T ∈ T : supp T ⊆ A } are metrizable for →W, since A is countable. As
before, the W-topology for all of T is an inductive limit: A set U ⊆ T is W-open iff
U ∩ TA is open in TA (according to →W) for all well ordered A.

Basics

The attractive topology is discrete on TNM = R⟦GNM⟧, the transseries of given
height and depth. Indeed, if T ∈ TNM, then for n > N the set T + o(1/expn) is open
and TNM ∩ (T + o(1/expn)) = {T}. So a net contained in some TNM converges iff it
is eventually constant. The series representing T ∈ T (for example the series ∑_{j=0}^{∞} x^{−j}) is
essentially never H-convergent—it is H-convergent only if it has all but finitely many
terms equal to 0.
For each m, the “coefficient” map T ↦ T[m] is continuous from (T, asymptotic) to
(R, discrete). Indeed, given m and T0 ∈ T, the function T[m] is constant on the coset
T0 + o(m). So it is better than continuous: it is locally constant.
The series representing T ∈ T is C-convergent to T. And W-convergent. Consider
the sequence x^{−log j}, (j = 1, 2, · · ·). This set is well ordered but not grid-based. So
x^{−log j} →W 0 but not x^{−log j} →C 0.
Coefficient maps T[m] are C-continuous and W-continuous. I guess locally constant,
too, since sets of the form { T ∈ T : T[m] = a } are C-open and W-open.
The whole transline T is not metrizable for C or W. Let Tjk = x^{−j}e^{kx}. Then
according to C convergence,

lim_{j→∞} Tjk = 0 for each k ∈ N.

In a metric space, it would then be possible to choose j1, j2, j3, · · · so that

lim_{k→∞} T_{jk,k} = 0.

(For example, for each k choose jk so that the distance from T_{jk,k} to 0 is < 1/k.) But
that is false for C or W.

Well-Based Pseudo Completeness


A system Tα ∈ T, where α ranges over the ordinals up to some limit ordinal λ, is called
a pseudo Cauchy sequence iff Tα − Tβ ≻ Tβ − Tγ for all α < β < γ < λ. And
T is a pseudo limit of Tα iff Tα − T ∼ Tα − Tα+1 for all α < λ. A space is called
pseudo complete if every pseudo Cauchy sequence has a pseudo limit. The well-based
Hahn series spaces R[[M]] are pseudo complete. (Grid-based spaces R⟦M⟧
are usually not pseudo complete. Instead there is a “geometric convergence” explained
in [9, Def. 3.15].) But the transseries field T, a proper subset of R[[G]], is not pseudo
complete.

A pseudo limit is not expected to be unique, but in our setting there is a distinguished
pseudo limit. It is the limit (in the W topology) of Sβ, where Sβ is the longest
common truncation of { Tα : α ≥ β }. See the “stationary limit” in [12].
Here is a well-based fixed point theorem from van der Hoeven [12, Thm. 4.7]. Note
that in our case where M is totally ordered, the special ordering ≺· coincides with
the usual ordering ≺.
Proposition 6.4. Let Φ : R[[M]] → R[[M]]. Assume for all T1, T2 ∈ R[[M]], if T1 ≠ T2,
then Φ(T1) − Φ(T2) ≺ T1 − T2. Then there is a unique S ∈ R[[M]] such that Φ(S) = S.
Proof. Uniqueness. Assume Φ(S1) = S1 and Φ(S2) = S2. If S1 ≠ S2, then Φ(S1) −
Φ(S2) = S1 − S2 ⊀ S1 − S2, a contradiction. So S1 = S2.
Existence (outline). Choose any nonzero T0 ∈ R[[M]]. For ordinals α we define Tα
recursively. Assume Tα has been defined. Consider two cases. If Φ(Tα) = Tα, then
S = Tα is the required result. Otherwise, let Tα+1 = Φ(Tα). If λ is a limit ordinal,
and Tα has been defined for all α < λ, then (recursively) Tα is pseudo Cauchy, so let
Tλ be a pseudo limit of (Tα)α<λ. Eventually the process must end because there are
more ordinals than elements of R[[M]].

Example. Consider Q = x+log x+log2 x+log3 x+· · · . The partial sums constitute
a pseudo Cauchy sequence in T, but the pseudo limits (such as Q itself) in R[[G]] are
not in T. This Q is the solution of Φ(Y ) = Y where Φ(Y ) = x + (Y ◦ log) is contracting
on R[[G]].
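
The orbit of this Φ is easy to watch with SymPy (my own sketch, assuming SymPy): starting from Y = x, each application of Φ appends one more iterated logarithm, producing exactly the partial sums of Q.

    # A sketch, not from the paper (assumes SymPy): the orbit of
    # Phi(Y) = x + (Y o log) starting from Y = x.
    import sympy as sp

    x = sp.symbols('x', positive=True)
    Y = x
    for _ in range(3):
        Y = x + Y.subs(x, sp.log(x))
    print(Y)   # x + log(x) + log(log(x)) + log(log(log(x)))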

Addition

Addition (S, T) ↦ S + T is H-continuous. Given m ∈ G, we have

(S + o(m)) + (T + o(m)) ⊆ S + T + o(m).

Addition is C-continuous. Assume Sj →C S, Tj →C T. There is A = Jµ,m with
S, T, Sj, Tj ∈ TA. If g ∈ A, then for all but finitely many j we have Sj[g] = S[g]
and Tj[g] = T[g], so that (Sj + Tj)[g] = (S + T)[g]. Thus Sj + Tj →C S + T. Addition
is W-continuous: same proof, except that A is merely required to be well ordered.

Multiplication

Multiplication (S, T) ↦ ST is H-continuous. We have

(S + o(m))(T + o(n)) ⊆ ST + o((mag S)n + (mag T)m + mn),

so given S, T ∈ T and g ∈ G, there exist m, n ∈ G with (S + o(m))(T + o(n)) ⊆
ST + o(g).
Multiplication is C-continuous [8, Prop. 3.48]. Let Si →C S, Ti →C T. There exist
µ, m so that Si →µ,m S and Ti →µ,m T. Then there exist µ̃, m̃ with Jµ,m · Jµ,m ⊆ Jµ̃,m̃.
(In fact we may take µ̃ = µ and m̃ = 2m.) Now given any g ∈ Jµ̃,m̃, there are finitely
many pairs (m, n) ∈ Jµ,m × Jµ,m with mn = g. For each such m or n, except for finitely
many indices i we have Si[m] = S[m] and Ti[n] = T[n]. So, except for i in a finite union
of finite sets we have (Si Ti)[g] = (ST)[g]. Therefore Si Ti →C ST.

Multiplication is W-continuous. This will be similar to C-continuity. We need to
use [8, Prop. 3.27]: Given any well ordered A ⊆ G, the set A · A is well ordered, and
for any g ∈ A · A, there are finitely many pairs (m, n) ∈ A × A with mn = g.

Differentiation

First note

(T + o(n))′ ⊆ T′ + o(n′) provided n ≠ 1.

Given any m ∈ G, there is S ∈ T with S′ = m by [8, Prop. 4.29]. We may assume the
constant term of S is zero. So let n = mag(S), and then n′ ∼ S′ = m so

(T + o(n))′ ⊆ T′ + o(m).

In fact, since n did not depend on T, we have shown that differentiation is H-uniformly
continuous.
Now consider C-continuity.
From [8, Prop. 3.76] or [9, Prop. 4.7]: Given µ, m, there exist µ̃, m̃ so that if
T ∈ Tµ,m then T′ ∈ Tµ̃,m̃ and if Tj ∈ Tµ,m with Tj →µ,m T, then T′j →µ̃,m̃ T′.
W-continuity probably needs a proof like [8, Prop. 3.76].
The derivative is computed as H-limit: From 5.1[A2] we have: for U ≺ GN−1,M ◦ S,

(T(S + U) − T(S))/U − T′(S) ∼ T′′(S)U/2,

so in the H-topology

T′(S) = lim_{U→0} (T(S + U) − T(S))/U.

Integration
Integration is continuous? This should be investigated.

Composition (Left)

For a fixed (large positive) S, consider the composition function T ↦ T ◦ S.
If Ti →C T, then Ti ◦ S →C T ◦ S [8, Prop. 3.99], which depends on [8, Prop. 3.95].
For W-continuity we need a proof like [8, Prop. 3.95].
Now consider H-continuity. Note

(T + o(n)) ◦ S ⊆ (T ◦ S) + o(n ◦ S).

So we need: Given m ∈ G, there is n such that n ◦ S ≼ m. So we would have H-uniform
continuity. Certainly this is true, since we can take n = 1/expN for large enough N.
But what about a less drastic solution? Of course: n = m ◦ S^{[−1]}. Or if we insist that
n be a monomial, n = mag(m ◦ S^{[−1]}).

Composition (Right)

What about continuity of composition T ◦ S as a function of the right composand S? It
is certainly false for C and W convergence. Indeed, let T = e^x. Then to compute even
one term of e^S we need to know all of the large terms of S; there could be infinitely
many large terms.
Now consider H-continuity.
Proposition 6.5. (i) Function exp is H-continuous on T. (ii) Function log is H-continuous
on (the positive subset of) T. (iii) Let T ∈ T. Then function S ↦ T ◦ S is
H-continuous on P.
Proof. (i) Let S0 ∈ T and m ∈ G be given. Let n = m mag(e^{−S0}) if m mag(e^{−S0}) ≼ 1,
and n = 1 otherwise. Now if s := S − S0 ≺ n, we have s ≺ 1 so e^s − 1 ∼ s ≺ n. And

e^S − e^{S0} = e^{S0}(e^{S−S0} − 1) ≺ e^{S0} n ≼ m.

That is: if S ∈ S0 + o(n), then e^S ∈ e^{S0} + o(m). This shows that exp is H-continuous
at S0.
(ii) Let S0 > 0 and m ∈ G be given. Then take n = m mag S0 if m ≼ 1, and
n = mag S0 otherwise. Now assume S − S0 ≺ n. Then

(S − S0)/S0 ≺ n/mag S0 ≼ 1

so

log(S) − log(S0) = log(S/S0) = log(1 + (S − S0)/S0) ∼ (S − S0)/S0 ≺ n/mag S0 ≼ m.
(iii) We will apply Corollary 3.2. Let R be the set of all T ∈ T such that the
function S ↦ T ◦ S is H-continuous. We now check the conditions of Corollary 3.2. If
g ∈ R, then g ◦ log ∈ R by (ii); this proves (f′). If L ∈ R, then e^L ∈ R by (i). And
x^b = e^{b log x} ∈ R by (i) and (ii). So x^b e^L ∈ R. This proves (e′).
Finally we must prove (d′). Let T ∈ T and assume supp T ⊆ R. (If T = 0 we have
T ∈ R trivially, so assume T ≠ 0.) Let g0 = mag T, so g0 ∈ R. Note that T/g0 ≍ 1 ≺ x.
By Proposition 4.12 we have

(T/g0) ◦ S2 − (T/g0) ◦ S1 ≺ S2 − S1,

so S ↦ (T/g0) ◦ S is (uniformly) H-continuous. By hypothesis, S ↦ g0 ◦ S is H-continuous.
So (since multiplication is H-continuous) it follows that the product

S ↦ ((T/g0) ◦ S) · (g0 ◦ S) = T ◦ S

is H-continuous.
So we may conclude R = T as required.

Fixed Point

Fixed point with parameter: conditions on Φ(S, T) beyond “contractive in S for each
T” so that if S = ST solves S = Φ(S, T), then T ↦ ST is a continuous function of T.
Compare [12]. This should be investigated for all three topologies.

7 Proof for the Simplest Taylor Theorem


I said in Section 5 that proofs for Taylor’s Theorem are quite involved. Here I include
a proof for the simplest one, namely 5.1[A1].

Proposition 7.1. Let T ∈ TN,M, T ∉ R, S ∈ P, U ∈ T. If N = 0, assume U ≺ S. If
N > 0, assume U ≺ GN−1,M ◦ S. Then

T(S + U) − T(S) ∼ T′(S) · U.   (†)

Proof. For N, M ∈ N, let A(N, M) mean that the statement of the theorem holds for
all T ∈ GN,M, and let B(N, M) mean that the statement of the theorem holds for
all T ∈ TN,M. Note for any N, M ∈ N, from U ≺ GN,M ◦ S it follows that U ≺ S:
Indeed, 1 ∈ GN,M, so U ≺ 1 ≺ S.
(1) Claim: Let S ∈ P, U ∈ T, and assume U ≺ S. Then

log(S + U) − log(S) ∼ U/S.   († log)

Indeed, U/S ≺ 1, so by the Maclaurin series for log(1 + z) we get

log(S + U) = log(S(1 + U/S)) = log(S) + log(1 + U/S)
           = log(S) − ∑_{j=1}^{∞} ((−1)^j/j)(U/S)^j = log(S) + U/S + o(U/S).

(2) A(0, 0): Let b ∈ R, b ≠ 0, S ∈ P, U ∈ T, and assume U ≺ S. Then

(S + U)^b − S^b ∼ b S^{b−1} · U.   (†G0)

Now U/S ≺ 1, so by Newton’s binomial series we get

(S + U)^b = S^b (1 + U/S)^b = S^b ∑_{j=0}^{∞} C(b, j)(U/S)^j
          = S^b (1 + b(U/S) + o(U/S)) = S^b + b S^{b−1} · U + o(S^{b−1} · U),

where the C(b, j) are the binomial coefficients.
Note that even if b = 0 the equation (S + U)^b = S^b + b S^{b−1} U + o(S^{b−1} U) remains true.
(3) B(0, 0): Let T ∈ T0, T ∉ R, S ∈ P, U ∈ T, and assume U ≺ S. Then (†).
Let dom T = a0 x^{b0}. First consider the case b0 ≠ 0. Then T′ ∼ a0 b0 x^{b0−1} and

a0(S + U)^{b0} − a0 S^{b0} = a0 b0 S^{b0−1} · U + o(S^{b0−1} · U) = T′(S) · U + o(T′(S) · U).

For any other term a x^b of T, we have b < b0 and

a(S + U)^b − a S^b = a b S^{b−1} · U + o(S^{b−1} · U) = o(S^{b0−1} · U) = o(T′(S) · U).

Summing all the terms of T, we get

T(S + U) − T(S) = T′(S) · U + o(T′(S) · U).
Now take the case b0 = 0. Subtract the dominant term: T1 = T − a0. Since we assumed T ∉ R, it follows that T1 ≠ 0. Also T′ = T1′. Applying the previous case to T1, we get

T(S + U) − T(S) = 0 + T1(S + U) − T1(S) = T1′(S) · U + o(T1′(S) · U)
= T′(S) · U + o(T′(S) · U).
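A concrete instance of (†) at height zero can also be tested symbolically; the choices of T, S, U below are hypothetical, picked only to satisfy the hypotheses of the proposition:

import sympy as sp

x = sp.symbols('x', positive=True)

# Hypothetical instance of (†): T = x**2 + x (a log-free transseries of height 0),
# S = x**3 (large and positive), U = x (so U ≺ S).
T = lambda z: z**2 + z
Tprime = lambda z: 2*z + 1
S, U = x**3, x

lhs = sp.expand(T(S + U) - T(S))        # 2*x**4 + x**2 + x
rhs = sp.expand(Tprime(S) * U)          # 2*x**4 + x
print(sp.limit(lhs / rhs, x, sp.oo))    # 1, so T(S + U) - T(S) ~ T'(S) * U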
(4) Let N ≥ 0. Claim: If B(N, 0), then A(N + 1, 0).
Assume B(N, 0). Let T ∈ GN+1, T ≠ 1. Then T = e^L, where L ≠ 0 is purely large in R[[GN ∪ {log x}]]. Let S ∈ P, and let U ∈ T with U ≺ GN ◦ S. Now in particular, U ≺ GN−1 ◦ S if N > 0, or U ≺ S if N = 0, so L(S + U) − L(S) ∼ L′(S) · U. But also L′ ∈ TN [noting that (log x)′ = 1/x ∈ TN] and L′ ≠ 0, so 1/L′ ∈ TN and thus mag(1/L′) ∈ GN. From the assumption U ≺ GN ◦ S we get U ≺ 1/L′(S), so L′(S) · U ≺ 1. So

U1 := L(S + U) − L(S) ∼ L′(S) · U ≺ 1.

Therefore we may use the Maclaurin series for e^z to expand:

T(S + U) − T(S) = e^{L(S+U)} − e^{L(S)} = (e^{U1} − 1) e^{L(S)} = (U1 + o(U1)) e^{L(S)}
= (L′(S) · U + o(L′(S) · U)) e^{L(S)} = T′(S) · U + o(T′(S) · U).
(5) Let N ≥ 1. Claim: If A(N, 0) then B(N, 0).
Same argument as (3).
(6) Let M ∈ N. Claim: If B(0, M) then B(0, M + 1).
Assume B(0, M). Let T ∈ T0,M+1, T ∉ R, S ∈ P, U ∈ T, and assume U ≺ S. Then T = T1 ◦ log, with T1 ∈ T0,M, and T′(x) = T1′(log x)/x. Now by (1),

U1 := log(S + U) − log(S) ∼ U/S ≺ 1 ≺ log S.
Now applying B(0, M) to T1, S1 = log S, U1, we get

T(S + U) − T(S) = T1(log(S + U)) − T1(log S) = T1(log S + U1) − T1(log S)
= T1(S1 + U1) − T1(S1) ∼ T1′(S1) · U1 ∼ T1′(log S) · U/S = T′(S) · U.
(7) Let N, M ∈ N, N > 0. Claim: If B(N, M) then B(N, M + 1).
Assume B(N, M). Let T ∈ TN,M+1, T ∉ R, S ∈ P, U ∈ T, and assume U ≺ GN−1,M+1 ◦ S. Then T = T1 ◦ log, with T1 ∈ TN,M, and T′(x) = T1′(log x)/x. Now for any N, M we have U ≺ S, so by (1),

U1 := log(S + U) − log(S) ∼ U/S ≺ 1.

Now if we write S1 = log S, then

U1 ∼ U/S ≺ U ≺ GN−1,M+1 ◦ S = GN−1,M ◦ S1.
Applying B(N, M) to T1, S1, U1, we get

T(S + U) − T(S) = T1(log(S + U)) − T1(log S) = T1(log S + U1) − T1(log S)
= T1(S1 + U1) − T1(S1) ∼ T1′(S1) · U1 ∼ T1′(log S) · U/S = T′(S) · U.
(8) By induction we have B(N, M) for all N, M. □

The other cases 5.1[An] and [A∞] would be proved in the same way. See [7, Sect. 6.8], [13, Prop. 5.11]. The argument will perhaps use the formula for the jth derivative of a composite function.
The condition U ≺ GN−1,M ◦ S comes from [7, Sect. 6.8]. In [13, Prop. 5.11] we can see that in fact we do not need to use all of GN−1,M; in the notation of [9, Def. 7.1], it suffices that U ≺ (1/m) ◦ S for all m ∈ tsupp T.
8 Proof for Propositions 4.9 and 4.10
Definition 8.1. Let R ⊆ T. We say R satisfies C iff for all T ∈ R and all S1, S2 ∈ P with S1 < S2,

T′ > 0 =⇒ T ◦ S1 < T ◦ S2,
T′ = 0 =⇒ T ◦ S1 = T ◦ S2,
T′ < 0 =⇒ T ◦ S1 > T ◦ S2.
We say R satisfies D iff for all A, B ∈ R and all S1, S2 ∈ P with S1 < S2: if A′ ≺ B′, then

A ◦ S2 − A ◦ S1 ≺ B ◦ S2 − B ◦ S1.
So Proposition 4.9 says T satisfies C and Proposition 4.10 says T satisfies D. These
are what I attempt to prove next. We will use notation TA = { T ∈ T : supp T ⊆ A }.
Remark 8.2. Let R ⊆ T. R satisfies C iff {T } satisfies C for all T ∈ R. R satisfies D
iff {A, B} satisfies D for all A, B ∈ R. If R satisfies C, then R ∪ {1} satisfies C. If R
satisfies D, then R ∪ {1} satisfies D.

Lemma 8.3. Let A ⊆ G. If A satisfies D, then TA satisfies D.

Proof. Assume A satisfies D. We may assume 1 ∈ A. Let A, B ∈ TA with A′ ≺ B′ and let S1, S2 ∈ P with S1 < S2. If B is replaced by B − c and/or A is replaced by A − c, then both the hypothesis A′ ≺ B′ and the conclusion A ◦ S2 − A ◦ S1 ≺ B ◦ S2 − B ◦ S1 are unchanged. So we may assume A, B have no constant terms. This means A ≺ B.
Let dom B = a0 g0, a0 ∈ R, a0 ≠ 0, g0 ∈ A. Then all terms of A, and all terms of B except for the single term a0 g0, are ≺ g0. Let ag be such a term, a ∈ R, g ∈ A. Since A satisfies D,
g ◦ S2 − g ◦ S1 ≺ g0 ◦ S2 − g0 ◦ S1,

so

ag ◦ S2 − ag ◦ S1 ≺ g0 ◦ S2 − g0 ◦ S1.    (1)
Summing (1) over all terms of A, we get

A ◦ S2 − A ◦ S1 ≺ g0 ◦ S2 − g0 ◦ S1.

Summing (1) over all terms of B except the dominant term, and adding the contribution a0(g0 ◦ S2 − g0 ◦ S1) of the dominant term itself, we get

B ◦ S2 − B ◦ S1 ≍ g0 ◦ S2 − g0 ◦ S1.

Therefore, A ◦ S2 − A ◦ S1 ≺ B ◦ S2 − B ◦ S1, as required. □
Lemma 8.4. Let A ⊆ G. If A satisfies C and D, then TA satisfies C.

Proof. Assume A satisfies C and D. We may assume 1 ∈ A. Let T ∈ TA and let S1, S2 ∈ P with S1 < S2. Since we may replace T by T − c, we may assume T has no constant term. Let dom T = a0 g0. Then T′ ∼ a0 g0′ with g0′ ≠ 0, so T′ has the same sign as a0 g0′. We may replace T by −T, so it suffices to consider the case T′ > 0. Now g0 ∈ A, which satisfies C, so a0 g0 ◦ S1 < a0 g0 ◦ S2. For all terms ag of T other than a0 g0, we have ag ◦ S2 − ag ◦ S1 ≺ a0 g0 ◦ S2 − a0 g0 ◦ S1, since A satisfies D. Summing these terms, we get T ◦ S2 − T ◦ S1 ∼ a0 g0 ◦ S2 − a0 g0 ◦ S1 > 0, so T ◦ S2 − T ◦ S1 > 0 as required. □
Lemma 8.5. G0 ∪ {log x} satisfies C.

Proof. This is Proposition 4.3 (a)(b)(c).

Lemma 8.6. Let A ⊆ G. If A satisfies C, then A ∪ {log} satisfies C.

Proof. As noted in Lemma 8.5, {log} satisfies C. Apply Remark 8.2.

Lemma 8.7. G0 ∪ {log x} satisfies D.

Proof. Let A, B ∈ G0 ∪ {log x} with A′ ≺ B′ and let S1, S2 ∈ P with S1 < S2. [Since B = 1 is impossible and A = 1 is clear, assume both are not 1.] First consider A = x^a, B = x^b, so that A′ ≺ B′ means a < b. We must show S2^a − S1^a ≺ S2^b − S1^b. Write S2 = S1 + U, U > 0, and consider three cases: U ≺ S1, U ≍ S1, U ≻ S1.
Case U ≺ S1. Then U/S1 ≺ 1 and

S2^b − S1^b = S1^b [(1 + U/S1)^b − 1] ∼ S1^b [(1 + bU/S1) − 1] = bS1^{b−1}U ≍ S1^{b−1}U.

So S2^b − S1^b ≍ S1^{b−1}U ≻ S1^{a−1}U ≍ S2^a − S1^a.
Case U ≍ S1. Say U/S1 ∼ c, c ∈ R, c > 0. Note (1 + c)^b − 1 is a nonzero constant, so

S2^b − S1^b = S1^b [(1 + U/S1)^b − 1] ∼ S1^b [(1 + c)^b − 1] ≍ S1^b.

So S2^b − S1^b ≍ S1^b ≻ S1^a ≍ S2^a − S1^a.
Case U ≻ S1. Then S2 = S1 + U ∼ U ≻ S1. If b > 0, then S1^b ≺ S2^b, so S2^b − S1^b ∼ S2^b. But if b < 0, then S1^b ≻ S2^b, so S2^b − S1^b ∼ −S1^b. So we may compute:
if b > a > 0, then S2^b − S1^b ∼ S2^b ≻ S2^a ∼ S2^a − S1^a;
if b > 0 > a, then S2^b − S1^b ∼ S2^b ≻ 1 ≻ S1^a ∼ S1^a − S2^a;
if 0 > b > a, then S1^b − S2^b ∼ S1^b ≻ S1^a ∼ S1^a − S2^a.
This completes the proof for x^a ≺ x^b. The computations for log x ≺ x^b or x^a ≺ log x are next.
Case U ≺ S1. Then

S2/S1 = (S1 + U)/S1 = 1 + U/S1,  and  log(S2) − log(S1) = log(S2/S1) ∼ U/S1.

If b > 0 then S2^b − S1^b ≍ S1^{b−1}U ≻ U/S1 ∼ log(S2) − log(S1). And if a < 0 then S1^a − S2^a ≍ S1^{a−1}U ≺ U/S1 ∼ log(S2) − log(S1).
Case U ≍ S1. Then U/S1 ∼ c, so

log(S2) − log(S1) = log(1 + U/S1) ∼ log(1 + c) ≍ 1.

If b > 0, then S2^b − S1^b ≍ S1^b ≻ 1 ≍ log(S2) − log(S1). If a < 0, then S1^a − S2^a ≍ S1^a ≺ 1 ≍ log(S2) − log(S1).
Case U ≻ S1. Then S2/S1 ≻ 1, so log(S2) − log(S1) = log(S2/S1) is large, and 0 < log(S2) − log(S1) < log(S2). If b > 0, then S2^b − S1^b ≍ S2^b ≻ log(S2) > log(S2) − log(S1) > 0, so S2^b − S1^b ≻ log(S2) − log(S1). If a < 0, then S1^a − S2^a ≍ S1^a ≺ 1 4 log(S2/S1) = log(S2) − log(S1). □
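Since the monomials in G0 ∪ {log x} are realized by actual functions, property D can be spot-checked by computing limits. A small sympy sketch (my addition; the particular A, B, S1, S2 are my choices, not from the text):

import sympy as sp

x = sp.symbols('x', positive=True)

# With B = x**2 we have A' ≺ B' both for A = log x and for A = 1/x.
# Take S1 = x and S2 = x**3, so U = S2 - S1 ≻ S1 (the last case above).
S1, S2 = x, x**3
B = x**2
for A in (sp.log(x), 1/x):
    ratio = (A.subs(x, S2) - A.subs(x, S1)) / (B.subs(x, S2) - B.subs(x, S1))
    print(sp.limit(ratio, x, sp.oo))    # 0 both times: A∘S2 - A∘S1 ≺ B∘S2 - B∘S1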
Lemma 8.8. Suppose G0 ⊆ A ⊆ G• and A satisfies D. Then A ∪ {log x} satisfies D.
Proof. Let A satisfy D, where G0 ⊆ A ⊆ G•. Let a, b ∈ A ∪ {log x} with a′ ≺ b′ and let S1, S2 ∈ P with S1 < S2. Since A already satisfies D, we are left only with the two cases a = log x and b = log x. Suppose a = log x, so that b ≻ log x ≻ 1. Since b is log-free, by [8, Prop. 3.71] there is a real constant c > 0 with x^c ≺ b. But x^c ∈ A, so x^c ◦ S2 − x^c ◦ S1 ≺ b ◦ S2 − b ◦ S1. By Lemma 8.7 we have log ◦S2 − log ◦S1 ≺ x^c ◦ S2 − x^c ◦ S1. Combining these, we get log ◦S2 − log ◦S1 ≺ b ◦ S2 − b ◦ S1.
Consider the other case, b = log x. If a = 1, the conclusion is clear. If a ≺ log x is log-free and not 1, then there is a real constant c < 0 with a ≺ x^c. Then, as in the previous case, we have x^c ◦ S2 − x^c ◦ S1 ≻ a ◦ S2 − a ◦ S1 and log ◦S2 − log ◦S1 ≻ x^c ◦ S2 − x^c ◦ S1, so log ◦S2 − log ◦S1 ≻ a ◦ S2 − a ◦ S1. □
Lemma 8.9. Suppose G0 ⊆ A ⊆ G•. If A satisfies C and D, then

Ã := { x^b e^L : b ∈ R, L ∈ TA purely large }

satisfies C.

Proof. First, A ∪ {log} satisfies C by Lemma 8.6 and D by Lemma 8.8. Then TA∪{log} satisfies C by Lemma 8.4.
Let g ∈ Ã, so that g = e^L with L ∈ TA∪{log} purely large, and let S1, S2 ∈ P with S1 < S2. Then g′ = L′e^L, so g′ has the same sign as L′. Take the case g′ > 0. Since L ∈ TA∪{log}, which satisfies C, we have L ◦ S1 < L ◦ S2. Exponentiate to get g ◦ S1 < g ◦ S2, as required.
The case g′ < 0 is done in the same way. □
Lemma 8.10. Assume TGN∪{log} satisfies C and D. Let B, L ∈ TGN∪{log}, with L purely large, and a = e^L ∈ GN+1. Assume a ≺ 1 ≺ B. Let S1, S2 ∈ P with S1 < S2. Then

B(S2) − B(S1) ≻ a(S1) − a(S2).
Proof. If L ∈ TGN−1∪{log}, then a ∈ GN, and this is known by D. So assume L ∉ TGN−1∪{log}. So mag L ∈ GN \ GN−1 has exact height N. Since both hypothesis and conclusion are unchanged when B is replaced by −B, we may assume B > 0. Then, since B is large and positive, we also have B′ > 0.
There are two cases, depending on the size of S2 − S1.
Case 1. S2 − S1 ⊀ GN ◦ S1. Let V = (x e^L / B′) ◦ S1. Then V > 0, and since B′ ∈ TN is log-free and mag L has exact height N, by [8, Prop. 3.72] we have x e^L / B′ ≺ GN, so V ≺ GN ◦ S1. So 0 < V < S2 − S1 and S1 < S1 + V < S2. Also B′(S1) · V = S1 e^{L(S1)} ≻ e^{L(S1)}. By C for B, we have B(S1 + V) < B(S2) and thus

B(S2) − B(S1) > B(S1 + V) − B(S1) ∼ B′(S1) · V = S1 e^{L(S1)}
≻ e^{L(S1)} > e^{L(S1)} − e^{L(S2)} > 0.

So

B(S2) − B(S1) ≻ e^{L(S1)} − e^{L(S2)} = |a(S2) − a(S1)|.
Case 2. S2 − S1 ≺ GN ◦ S1. Now S2 − S1 ≺ GN−1 ◦ S1, so by Proposition 7.1 we have

B(S2) − B(S1) ∼ B′(S1) · (S2 − S1),
L(S2) − L(S1) ∼ L′(S1) · (S2 − S1).

But L ∈ TGN∪{log}, so L′ ∈ TN, so mag(1/L′) ∈ GN, and thus S2 − S1 ≺ 1/L′(S1), so

U := L(S1) − L(S2) ∼ L′(S1) · (S1 − S2) ≺ 1.

Expand using the Maclaurin series for e^z:

a(S1) − a(S2) = e^{L(S1)}(1 − e^{−U}) = e^{L(S1)}(U + o(U))
∼ −e^{L(S1)} L′(S1) · (S2 − S1) = −a′(S1) · (S2 − S1)
≺ B′(S1) · (S2 − S1) ∼ B(S2) − B(S1).

This completes the proof. □
Lemma 8.11. Let N ∈ N. Suppose GN satisfies C and D. Then GN+1 satisfies D.

Proof. Since GN satisfies C and D, we have: GN ∪ {log} satisfies C by Lemma 8.6 and D by Lemma 8.8; and TGN∪{log} satisfies C by Lemma 8.4 and D by Lemma 8.3.
Let a, b ∈ GN+1 with a′ ≺ b′ and let S1, S2 ∈ P with S1 < S2. Since b = 1 is impossible and a = 1 is easy, assume they are not 1; so a ≺ b. Note log b ∈ TGN∪{log} is purely large and nonzero, hence large.
Let m = a/b, so that m ≺ 1, and thus m(S1) ≺ 1, m(S2) ≺ 1.
I claim that

b(S1) · (m(S2) − m(S1)) / (b(S2) − b(S1)) ≺ 1.    (2)
We will prove this in cases.
Case 1: b(S1) ≻ b(S2). Then b(S1) − b(S2) ∼ b(S1), so

b(S1) · (m(S2) − m(S1)) / (b(S2) − b(S1)) ∼ m(S1) − m(S2) ≺ 1,

as claimed.
Case 2: b(S1) 4 b(S2). If b(S2) > b(S1), then apply Lemma 8.10 [to m ≺ 1 ≺ log b] to get

b(S1) · (m(S2) − m(S1)) / (b(S2) − b(S1)) ≺ b(S1) · (log b(S2) − log b(S1)) / (b(S2) − b(S1))
= b(S1) · log(b(S2)/b(S1)) / (b(S2) − b(S1))
< b(S1) · (b(S2)/b(S1) − 1) / (b(S2) − b(S1)) = 1.

On the other hand, if b(S2) < b(S1), then again apply Lemma 8.10 [to m ≺ 1 ≺ log b] to get

b(S1) · (m(S1) − m(S2)) / (b(S1) − b(S2)) ≺ b(S1) · (log b(S1) − log b(S2)) / (b(S1) − b(S2))
= b(S1) · log(b(S1)/b(S2)) / (b(S1) − b(S2))
< b(S1) · (b(S1)/b(S2) − 1) / (b(S1) − b(S2)) = b(S1)/b(S2) 4 1.

So in both cases, we have established (2).
Now compute

a(S2) − a(S1) = b(S2)m(S2) − b(S1)m(S1)
= (b(S2) − b(S1)) · [ m(S2) + b(S1) · (m(S2) − m(S1)) / (b(S2) − b(S1)) ]
≺ b(S2) − b(S1).

The final step uses (2) together with m(S2) ≺ 1. □
Proposition 8.12. T• = R[[G•]] satisfies C and D.

Proof. By Lemmas 8.5 and 8.7, G0 satisfies C and D. Applying Lemmas 8.9 and 8.11 inductively, we conclude that GN satisfies C and D for all N ∈ N. And therefore G• = ∪N GN satisfies C and D by Remark 8.2. Finally T• satisfies C and D by Lemmas 8.3 and 8.4. □
Proposition 8.13. Let R ⊆ T and define R̃ := { T ◦ log : T ∈ R }. If R satisfies C, then R̃ satisfies C. If R satisfies D, then R̃ satisfies D.

Proof. Assume R satisfies C. Let Q ∈ R̃, so that Q = T ◦ log with T ∈ R. Note Q′ = (T′ ◦ log)/x, so that T′ and Q′ have the same sign. Let S1, S2 ∈ P with S1 < S2. Then log(S1), log(S2) ∈ P with log(S1) < log(S2). Now if T′ > 0, then applying property C of R to log(S1) and log(S2), we get T(log(S1)) < T(log(S2)). That is: Q(S1) < Q(S2). The cases T′ = 0 and T′ < 0 are similar.
The proof for D is done in the same way. □
Theorem 8.14. The whole transline T satisfies C and D.

Proof. T• satisfies C and D by Proposition 8.12. Applying Proposition 8.13 M times, { T ◦ log[M] : T ∈ T• } satisfies C and D for each M ∈ N. Every transseries belongs to one of these sets, so T satisfies C and D by Remark 8.2. □
9 Further Transseries
Suppose we allow well-based transseries, but do not end in ω steps. Begin as in Definition 2.1. Write Wω = W•,•, where ω is the first infinite ordinal. Then proceed by transfinite recursion: If α is an ordinal and Gα has been defined, let Tα = R[[Gα]] and Wα+1 = { e^L : L ∈ Tα is purely large }. If λ is a limit ordinal and Wα have been defined for all α < λ, let

Wλ = ∪α<λ Wα.

See [17, §2.3.4]. Does it exist elsewhere, as well?
Call the elements of Wα Schmeling transmonomials and the elements of Tα Schmeling transseries. This will allow such transseries as

H := log x + log log x + log log log x + · · ·

and such monomials as

G := e^{−H} = 1/(x · log x · log log x · log log log x · · ·).

(In the notation of [17, §2.3], H ∈ L and G ∈ L^exp.) This G is interesting (as those who have thought about convergence and divergence of series will know) because: for actual transseries T, we have ∫ T ≻ 1 if and only if T ≻ G. That is, for S ∈ T we have: if S ≻ 1 then S′ ≻ G; if S ≺ 1 then S′ ≺ G.
So what happens if we attempt to investigate ∫ G? It seems that there is no Schmeling transseries S with S′ = G.
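The classical fact standing behind this boundary role of G: each finite truncation of the product x · log x · log log x · · · integrates to the next iterated logarithm, which is still large. A short sympy check (my illustration, not part of the original development; sympy should return both antiderivatives in closed form):

import sympy as sp

x = sp.symbols('x', positive=True)

# Truncations of the product under G integrate to iterated logarithms:
print(sp.integrate(1 / (x * sp.log(x)), x))                        # log(log(x))
print(sp.integrate(1 / (x * sp.log(x) * sp.log(sp.log(x))), x))    # log(log(log(x)))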

Iterated Log of Iterated Exp

A Usenet sci.math discussion in July, 2009, suggested investigation of the growth rate of a function Y with Y = log(Y(e^{ax})) for a fixed constant a (there it was log 3). This Y should be a limit of the sequence:

Y0 = x,
Y1 = log(e^{ax}),
Y2 = log log e^{a e^{ax}},
Y3 = log log log e^{a e^{a e^{ax}}},
and so on. Iteration of transseries suggests a solution Y not of finite height. It seems Y should begin

Y = ax + log(a) + (log(a)/a) e^{−ax} − (1/2)(log(a)^2/a^2) e^{−2ax} + (1/3)(log(a)^3/a^3) e^{−3ax}
− (1/4)(log(a)^4/a^4) e^{−4ax} + (1/5)(log(a)^5/a^5) e^{−5ax} − (1/6)(log(a)^6/a^6) e^{−6ax} + · · ·
and so on; order-type ω. Writing µ1 for e^{−ax}, these terms are constants times powers of µ1. Beyond all of those, we have terms involving µ2 = exp(−a exp(ax)), beginning

µ2 ( log(a) µ1 − log(a)^2 µ1^2 + log(a)^3 µ1^3 − log(a)^4 µ1^4 + log(a)^5 µ1^5 + · · · )
+ µ2^2 ( −(log(a)^2/2) µ1 + ((log(a)^3 − log(a)^2)/2) µ1^2 + ((2 log(a)^3 − log(a)^4)/2) µ1^3
+ ((log(a)^5 − 3 log(a)^4)/2) µ1^4 + ((4 log(a)^5 − log(a)^6)/2) µ1^5 + · · · ) + · · ·

Order-type ω^2. Beyond all those we have terms involving µ3 = exp(−a exp(a exp(ax))); order-type ω^3. And so on with µk of height k for k ∈ N.

Surreal Numbers
If this extension for well-based transseries is continued through all the ordinals, the result is a large (proper class) real-closed ordered field, with additional operations. J. H. Conway's system of surreal numbers [2] is also a large (proper class) real-closed ordered field, with additional operations. Any ordered field (with a set of elements, not a proper class) can be embedded in either of these. We can build recursively a correspondence between the well-based transseries and the surreal numbers, but it involves many arbitrary choices.
Compare [13, p. 16]. Is there a canonical correspondence, not only preserving the ordered field structure, but also some of the additional operations? Or is there a canonical
embedding of one into the other? Perhaps we need to take the recursive way in which
one of these systems is built up and find a natural way to imitate it in the other system.
Reals should correspond to reals. The transseries x should correspond to the surreal
number ω. But there are still many more details not determined just by these.

References
[1] M. Aschenbrenner, L. van den Dries, Asymptotic differential algebra. In [6],
pp. 49–85
[2] J. H. Conway, On numbers and games. Second edition. A K Peters, Natick, MA,
2001
[3] O. Costin, Topological construction of transseries and introduction to generalized
Borel summability. In [6], pp. 137–175
[4] O. Costin, Global reconstruction of analytic functions from local expansions and
a new general method of converting sums into integrals. preprint, 2007.
http://arxiv.org/abs/math/0612121
[5] O. Costin, Asymptotics and Borel Summability. CRC Press, London, 2009
[6] O. Costin, M. D. Kruskal, A. Macintyre (eds.), Analyzable Functions and Appli-
cations (Contemp. Math. 373). Amer. Math. Soc., Providence RI, 2005
[7] L. van den Dries, A. Macintyre, D. Marker, Logarithmic-exponential series. Annals
of Pure and Applied Logic 111 (2001) 61–113
[8] G. Edgar, Transseries for beginners. preprint, 2009.
http://arxiv.org/abs/0801.4877 or
http://www.math.ohio-state.edu/~edgar/preprints/trans_begin/
[9] G. Edgar, Transseries: ratios, grids, and witnesses. forthcoming.
http://www.math.ohio-state.edu/~edgar/preprints/trans_wit/
[10] G. Edgar, Fractional iteration of series and transseries. preprint, 2009.
http://www.math.ohio-state.edu/~edgar/preprints/trans_frac/
[11] G. Higman, Ordering by divisibility in abstract algebras. Proc. London Math. Soc.
2 (1952) 326–336
[12] J. van der Hoeven, Operators on generalized power series. Illinois J. Math. 45
(2001) 1161–1190
[13] J. van der Hoeven, Transseries and Real Differential Algebra (Lecture Notes in
Mathematics 1888). Springer, New York, 2006
[14] J. van der Hoeven, Transserial Hardy fields. preprint, 2006
[15] S. Kuhlmann, Ordered Exponential Fields. American Mathematical Society, Prov-
idence, RI, 2000
[16] S. Scheinberg, Power series in one variable. J. Math. Anal. Appl. 31 (1970) 321–333
[17] M. C. Schmeling, Corps de transséries. Ph.D. thesis, Université Paris VII, 2001