Jia 2005 221
THE DOMAIN OF THE IMPLICIT FUNCTION
AND APPLICATIONS
MARCO PAPI
The implicit function theorem asserts that there exists a ball of nonzero radius within
which one can express a certain subset of variables, in a system of equations, as functions
of the remaining variables. We derive a lower bound for the radius of this ball in the case
of Lipschitz maps. Under a sign-preserving condition, we prove that an implicit function
exists in the case of a set of inequalities. Also in this case, we state an estimate for the
size of the domain. An application to the local Lipschitz behavior of solution maps is
discussed.
1. Introduction
The implicit function theorem is one of the fundamental results in multivariable anal-
ysis [1, 8, 11]. It asserts that if Fi (x, y), i = 1,...,n, x ∈ Rm , y ∈ Rn , are continuously
differentiable functions in a neighborhood of a point (x0 , y0 ), where Fi (x0 , y0 ) = 0, for
i = 1,...,n, and the Jacobian
Dy F(x0 , y0 ) = (∂Fi /∂y j (x0 , y0 ))1≤i, j≤n (1.1)
is invertible, then there exist r > 0 and continuous functions g1 (x),...,
gn (x), defined in the domain B = {x ∈ Rm : |x − x0 | < r }, such that gi (x0 ) = y0i and
Fi (x,g1 (x),...,gn (x)) = 0, for i = 1,...,n, in B. This theorem has been extended to Lip-
schitz functions by Clarke [4, 5]. In this case F = (F1 ,...,Fn ) is a locally Lipschitz func-
tion in a neighborhood of (x0 , y0 ) and the invertibility assumption is required for all the
matrices of the generalized Jacobian of F at (x0 , y0 ).
Despite the central role played by this result in analysis, multidimensional nonlinear
optimization algorithms [2, 7, 16, 17], and in developing Newton-type methods for solv-
ing nonsmooth equations [12, 13, 18], a lower bound for the size of the domain B has
not been sufficiently investigated in the literature. The first nontrivial estimate has been
reported in [3] for the case of complex analytic functions. The authors base their result
on Rouché's theorem to derive a lower bound in the case n = 1, then they recursively
extend this estimate to the general case.
Some important problems, like those which appear in sensitivity and stability analysis
of systems of equations and inequalities [14, 15], do not show strong regularity proper-
ties. Therefore, over the years, a great deal of attention has been focused on developing
new tools for maps not necessarily differentiable.
Given the central role played by Lipschitz continuity, one of the purposes of
this paper is to establish an estimate for the size of the domain B and consequently of
the set of values of the implicit function, in the context of Lipschitz continuous maps.
These estimates can be applied to prove the upper-Lipschitz continuity [19] of some
set-valued maps. This is a recently introduced concept of regularity which turns out to
be quite natural in nonlinear optimization. For illustration, we consider the question of
the local Lipschitz behavior of the map “parameter x maps to set of solutions of x ∈
f (y) + C,” where x ∈ Rm , y ∈ Rn , f : Rn → Rm is a Lipschitz function and C ⊂ Rm is
closed; more precisely, we consider the following map:
x ∈ Rm −→ S(x) = { y ∈ Rn : x ∈ f (y) + C }. (1.2)
Results of this type are mainly used for constructing effective numerical algorithms. Actually, by Newton's
method [18] applied to the problem x ∈ f (y) + C, we mean the following procedure
which generates a sequence { y1 , y2 ,...}, with a given starting point y0 , according to the
rule
x ∈ f (yn ) + ∂ f (yn ) · (yn+1 − yn ) + C, (1.3)
where ∂ f denotes the generalized Jacobian of f . For given x and y, let { yn } be a New-
ton sequence, that is, a sequence starting from y and satisfying (1.3). Denote by N(x, y)
the set of all Newton sequences for x, starting from the point y. Then in [6] it is proved
that the local upper-Lipschitz continuity of S implies that every Newton sequence, within
a sufficiently small ball around the solution, is convergent. Moreover the radius of this
ball is controlled by a constant which depends on the local Lipschitz behavior of S. Actu-
ally, in establishing the upper-Lipschitz continuity, we have to estimate three parameters
which characterize this property and, as shown in [6], these numbers can be used to de-
rive the rate of convergence of Newton sequences. As shown in Section 4, the implicit
function theorems we prove can be used to find lower bounds for these three constants,
see Proposition 4.2.
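When f is continuously differentiable and C = {0}, the generalized Jacobian ∂ f (yn ) reduces to the classical Jacobian and the inclusion (1.3) collapses to the usual Newton iteration. The following sketch illustrates the procedure on a hypothetical smooth scalar example f (y) = y³ + y (not taken from the paper):

```python
import numpy as np

def newton_sequence(f, df, x, y0, steps=20, tol=1e-12):
    """Generate a Newton sequence for the equation x = f(y) (i.e. C = {0}).

    For continuously differentiable f, the inclusion (1.3) reduces to the
    classical iteration  y_{n+1} = y_n - df(y_n)^{-1} (f(y_n) - x).
    """
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    seq = [y]
    for _ in range(steps):
        step = np.linalg.solve(np.atleast_2d(df(y)), f(y) - x)
        y = y - step
        seq.append(y)
        if np.linalg.norm(step) < tol:
            break
    return seq

# Hypothetical example: solve x = y^3 + y for x = 10 (unique root y = 2).
f = lambda y: y**3 + y
df = lambda y: np.atleast_2d(3 * y**2 + 1)
seq = newton_sequence(f, df, x=10.0, y0=1.0)
print(seq[-1])  # close to 2
```

The set N(x, y) of Newton sequences discussed next is, in this smooth single-valued case, the single trajectory produced above.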
In many applications, as in the problem mentioned above, we are mainly interested in
finding an implicit function for a set of inequalities (i.e., Fi ≤ 0, for 1 ≤ i ≤ n), where the
variable y is constrained to stay in a closed convex set Ω ⊂ Rn . In this case, we cannot
apply the classical version of the implicit function theorem because the implicit function
has to map the variable x into the set Ω. Moreover, the reference point y0 can lie on the
boundary of Ω, while the map F may fail to be continuously differentiable in a neigh-
borhood of this point, as required by the classical version of the implicit function theo-
rem. Another interesting case appears when the map F is Lipschitz continuous around
the reference point (x0 , y0 ), but the generalized Jacobian at this point contains singular
matrices. In such a situation, we cannot apply the Lipschitz version of that result, see
Example 4.7.
The question, then, is: can we construct an implicit function g(x) = (g1 (x),...,gn (x))
such that Fi (x,g(x)) ≤ 0, for 1 ≤ i ≤ n, in some neighborhood of x0 ?
A second purpose of this work is to give an answer to this question. By requiring a
sign-preserving condition on the Jacobian, we will prove that an implicit function exists,
see Theorem 3.4. This result can be used to study the local Lipschitz properties of the
solution map (1.2). Therefore, also for this version of the implicit function theorem, we
state a lower bound for the size of the domain of the implicit function.
The outline of the paper is as follows. Section 2 provides some basic definitions and
notations used in the rest of the paper. In Section 3, we state our main results. In Section 4,
we discuss the connection between our extensions of the implicit function theorem and
the local Lipschitz behavior of the solution map (1.2). Furthermore, for an easier com-
prehension of our assumptions and results, in this section, we present some examples.
Finally Section 5 is devoted to the proof of the results.
Let 𝒮 be a set of real square matrices of the same order. We say that 𝒮 is invertible if
every matrix in 𝒮 is invertible, and we will denote by 𝒮−1 the set of inverses of 𝒮, that is,
𝒮−1 = { S−1 : S ∈ 𝒮 }. (2.2)
224 The domain of the implicit function
‖𝒮‖∞ = sup_{S∈𝒮} ‖S‖. (2.3)
Definition 2.1. Let S ∈ Rn×n , then S is sign preserving if and only if the following holds
true:
S· p ≥ 0 ∀ p ∈ Rn s.t. p ≥ 0. (2.5)
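Condition (2.5) admits a simple characterization: testing it on the coordinate vectors p = e_j shows that S is sign preserving exactly when every entry of S is nonnegative. A minimal sketch of this check (the function name is ours):

```python
import numpy as np

def is_sign_preserving(S):
    """Check condition (2.5): S @ p >= 0 for every p >= 0.

    Testing p = e_1, ..., e_n shows each column of S must be nonnegative,
    so (2.5) is equivalent to S having no negative entry; conversely, a
    matrix with nonnegative entries clearly maps p >= 0 to S @ p >= 0.
    """
    return bool((np.asarray(S) >= 0).all())

print(is_sign_preserving([[1.0, 0.5], [0.0, 2.0]]))   # True
print(is_sign_preserving([[1.0, -0.1], [0.0, 2.0]]))  # False
```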
Lip(F) = sup_{x,x′ ∈A, x≠x′} |F(x) − F(x′)| / |x − x′|. (2.6)
Let D(F) denote the set of all points in A where F is differentiable, then by Rademacher’s
theorem, D(F) is of full Lebesgue measure on A. Therefore we can define the generalized
Jacobian of F at a point x ∈ A:
∂F(x) = co{ P ∈ Rm×n : P = lim_{xh →x} DF(xh ), xh ∈ D(F) }, (2.7)
where the notation “co” indicates the convex hull. We define the generalized partial de-
rivative of F at x = (x1 ,x2 ) ∈ Rn1 × Rn2 , with n1 + n2 = n, as
∂x1 F(x1 ,x2 ) = { P ∈ Rm×n1 : ∃Q ∈ Rm×n2 s.t. [P,Q] ∈ ∂F(x) }. (2.8)
In a similar way, one can define ∂x2 F(x1 ,x2 ). Some fundamental properties of the gener-
alized Jacobian are summarized below.
(6) Let Γ be a set-valued map from A ⊂ Rm to the subsets of Rn . For every U ⊂ A, we
set Γ(U) = ⋃_{x∈U} Γ(x).
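As an illustration of definition (2.7), the generalized Jacobian of a scalar Lipschitz function can be approximated numerically by sampling classical derivatives at nearby points of differentiability; for f (x) = |x| at x = 0 this recovers co{−1, 1} = [−1, 1]. A rough sketch (the sampling scheme and tolerances are our own choices):

```python
import numpy as np

def clarke_endpoints(f, x, h=1e-6, samples=200, rng=None):
    """Approximate the Clarke generalized Jacobian (2.7) of a scalar
    Lipschitz f at x: sample classical derivatives DF(x_h) at random
    points x_h near x and return the extreme sampled slopes, whose
    interval approximates the convex hull in (2.7)."""
    rng = np.random.default_rng(rng)
    pts = x + h * (rng.random(samples) * 2 - 1)        # points x_h near x
    eps = 1e-9
    slopes = (f(pts + eps) - f(pts - eps)) / (2 * eps)  # ~ DF(x_h)
    return slopes.min(), slopes.max()

lo, hi = clarke_endpoints(np.abs, 0.0, rng=0)
print(lo, hi)  # approximately -1 and 1, so the hull is about [-1, 1]
```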
3. Main results
In this section, we present our main results. The first version of the implicit function
theorem is concerned with the case of Lipschitz maps, and here we estimate the size of
the neighborhoods where the implicit function is defined. The proof of this result is based
on a fundamental inclusion proved in Proposition 5.1.
As explained in Section 4, the other result we present allows us to deal with the following
situations.
(1) The reference point y0 lies on the boundary of the set where the map F is defined.
(2) The map F is Lipschitz continuous around the reference point but the generalized
Jacobian is not invertible (see point (4) in Section 2).
In these cases, assuming a sign-preserving condition (see Definition 2.1), we prove the
existence of an implicit function for the system of inequalities F ≤ 0. Furthermore, as for
the previous result, we state a lower bound for the size of the domain where this function
is defined.
Theorem 3.1. Let F : ᏻ → Rn be a Lipschitz map defined in the open set ᏻ ⊂ Rm × Rn .
Let (x0 , y0 ) ∈ ᏻ, L > 0, λ > 0, and r > 0 be such that Br (x0 ) × Br (y0 ) ⊂ ᏻ and the following
hypotheses hold:
(i) F(x0 , y0 ) = 0;
(ii) ‖(∂ y F(x0 , y0 ))−1 ‖∞ < λ;
(iii) ‖∂x F(Br (x0 ) × Br (y0 ))‖∞ ≤ L.
Let r2 = r1 /(2ℓ), ℓ = max(1,(1 + L)λ), where
r1 = sup{ ρ ∈ [0,r] : co{ ∂ y F(Bρ (x0 ) × Bρ (y0 )) } ⊂ S(λ,n) }. (3.1)
Then there exists a unique function g ∈ C(Br2 (x0 );Br1 (y0 )) satisfying g(x0 ) = y0 and
F(x,g(x)) = 0 ∀x ∈ Br2 (x0 ). (3.2)
Remark 3.2. By the properties recalled in Proposition 2.3, it is clear that the set-valued
map ρ → co{∂ y F(Bρ (x0 ) × Bρ (y0 ))}, defined for ρ ∈ [0,r], is u.s.c. at 0. By (ii), ∂ y F(x0 , y0 )
⊂ S(λ,n) and, by (2.4), S(λ,n) is open in Rn×n ; therefore the number r1 in (3.1) is well
defined. We observe that the radius r2 represents a lower bound for the size of the domain
of the implicit function.
Remark 3.3. The uniqueness of the implicit function g in Theorem 3.1 implies that any
point (x, y) ∈ Br2 (x0 ) × Br1 (y0 ) such that F(x, y) = 0 belongs to the graph of g, that is,
g(x) = y.
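To see the Lipschitz estimate of Theorem 3.1 at work, consider the hypothetical map F(x, y) = 2y + |y| − x (our example, not the paper's): here ∂ y F ⊂ [1, 3], so λ = 1 works, and ∂x F = {−1}, so L = 1. The implicit function is g(x) = x/3 for x ≥ 0 and g(x) = x for x < 0, whose Lipschitz constant 1 matches the bound λL obtained in the proof (see (5.13)). A numerical check by bisection:

```python
import numpy as np

# Hypothetical Lipschitz map F(x, y) = 2y + |y| - x with F(0, 0) = 0.
# Here ∂_y F ⊂ [1, 3] (so λ = 1 works, since |P p| >= |p|) and
# ∂_x F = {-1} (so L = 1).  Solve F(x, g(x)) = 0 by bisection:
F = lambda x, y: 2 * y + abs(y) - x

def g(x, lo=-10.0, hi=10.0, iters=80):
    for _ in range(iters):             # F(x, .) is strictly increasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(x, mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

xs = np.linspace(-1.0, 1.0, 201)
gs = np.array([g(x) for x in xs])
lip = np.max(np.abs(np.diff(gs) / np.diff(xs)))
print(lip)  # about 1, matching the bound Lip(g) <= λ L
```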
Theorem 3.4. Let F : ᏻ → Rn be a continuous function in the open set ᏻ ⊂ Rm × Rn . Let
ε > 0, (x0 , y0 ) ∈ ᏻ, r1 , r2 > 0 be such that, if B1+ = { y ∈ Rn : 0 ≤ y − y0 ≤ r1 } and B2 = {x ∈
Rm : |x − x0 | ≤ r2 }, then B2 × B1+ ⊂ ᏻ and the following conditions hold:
(i) F(x0 , y0 ) ≤ 0;
(ii) F ∈ C 1 (B2 × B1+ );
(iii) −D y F(x0 , y0 ) is invertible and sign preserving;
(iv) the following inequalities are satisfied:
sup_{x∈B2 } |F(x, y0 ) − F(x0 , y0 )| ≤ r1 / ((1 + ε√n)‖T0 ‖), (3.4)
sup_{B2 ×B1+ } ‖In − T0 · D y F‖ ≤ ε/(1 + ε√n), T0 = (D y F(x0 , y0 ))−1 . (3.5)
Then for any 0 < a < δ1 , 0 < b < δ2 , S is locally upper-Lipschitz continuous at (x∗ , y ∗ ), with
constants a and b for neighborhoods and c for growth.
Remark 4.3. With regard to the role of the parameter a in the convergence of Newton
sequences, we observe that in [6, Theorem 3.1 and Corollary 3.1], sufficient conditions
are given so that any sequence { yn } obtained from the algorithm
x = f (yn ) + Hn · (yn+1 − yn ), n = 0,1,2,... , (4.4)
Remark 3.3 implies that y = g(x), and the inequality (3.3) yields
|y − y ∗ | = |g(x) − g(x∗ )| ≤ c|x − x∗ |. (4.6)
Therefore, y ∈ y ∗ + c|x − x∗ | clB1 (0). In light of Definition 4.1, this proves the assertion.
Remark 4.4. We observe that under the same assumptions, the argument used in the
proof of Proposition 4.2 still works in the case of a system of inequalities (i.e., f (y) − x ≤
0). In this situation it suffices to consider the map F(x, y) = [ f (y) − x] − [ f (y ∗ ) − x∗ ].
In the following example, we construct a lower bound for the supremum in (3.1).
Example 4.5. Let α,β,γ be real-valued, Lipschitz continuous functions on Rm such that
∂ y F(Bρ (0) × (−ρ,ρ)) ⊂ ⋃_{|x|<ρ} [α(x),β(x)], (4.11)
co{ ∂ y F(Bρ (0) × (−ρ,ρ)) } ⊂ S(λ,1). (4.13)
Hence, by (3.1), the number ρ belongs to the set whose supremum is r1 , so r1 ≥ ρ. By the
arbitrary choice of ρ > 0 satisfying (4.10), we get the following estimate:
r1 ≥ (α(0) − λ−1 ) / Lip(α). (4.14)
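For concreteness, with the hypothetical data α(x) = 2 − |x| (so α(0) = 2 and Lip(α) = 1) and threshold λ = 1 (our numbers, chosen only to illustrate (4.14)), the estimate gives r1 ≥ 1:

```python
# Hypothetical data for the estimate (4.14): suppose the slope bounds
# satisfy α(x) = 2 - |x| (so α(0) = 2, Lip(α) = 1) and the invertibility
# threshold is λ = 1.  The lower bound for r1 is then (α(0) - 1/λ)/Lip(α).
alpha0, lip_alpha, lam = 2.0, 1.0, 1.0
r1_lower = (alpha0 - 1.0 / lam) / lip_alpha
print(r1_lower)  # 1.0
```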
Remark 4.6. The typical cases where we can apply Theorem 3.4 are the following.
(1) Let G ∈ C 1 (Rm × clΩ; Rn ), where Ω ⊂ Rn is an open convex set, and consider x0 ∈
Rm , y0 ∈ ∂Ω, such that G(x0 , y0 ) ≤ 0. Then it is easy to show that there exists at least a
basis v = {v1 ,v2 ,...,vn } in Rn and a positive number r such that the “cones”
Cr (y0 ;v) = { y0 + Σ_{i=1}^n yi vi : yi ∈ [0,r] ∀i = 1,...,n },
Cr+ (y0 ;v) = { y0 + Σ_{i=1}^n yi vi : yi ∈ (0,r] ∀i = 1,...,n } (4.15)
satisfy Cr (y0 ;v) ⊂ clΩ and Cr+ (y0 ;v) ⊂ Ω. For instance, if Ω = { y ∈ Rn : yn < 0} and
y0 = 0, then we can consider the system v = {e1 ,e2 ,..., −en }, {ei }1≤i≤n being the standard
orthonormal basis of Rn . With this choice, the previous inclusions hold for any r > 0.
For a basis v = {v1 ,v2 ,...,vn } satisfying the previous assumptions, consider the func-
tion F defined by
F(x, y) = G(x, y0 + V · y), (4.16)
where V ∈ Rn×n is the matrix whose columns are the vectors v1 ,v2 ,...,vn . Then F ∈
C 1 (Rm × [0,r]n ); if D y G(x0 , y0 ) is invertible and the following holds:
p ∈ Rn , V −1 · p ≥ 0 =⇒ D y G(x0 , y0 ) · p ≤ 0, (4.17)
then F satisfies the hypotheses of Theorem 3.4 at (x0 ,0): in fact, given the regularity as-
sumptions on G, it is always possible to solve the inequalities (3.4), (3.5) to determine
r1 ≤ r and r2 . Therefore, there exists a Lipschitz continuous function g : B2 → [0,r1 ]n such
that g(x0 ) = 0 and (3.6) holds true. In light of the requirements on v, the Lipschitz con-
tinuous function f = y0 + V · g is clΩ-valued, and
f (x0 ) = y0 , G(x, f (x)) ≤ 0 ∀x ∈ B2 . (4.18)
which contains 0. Therefore we cannot apply the Lipschitzian version of the implicit func-
tion theorem at (0,0). Nevertheless, by considering the restriction of F to the domain
R × [0, ∞), we get F ∈ C 1 (R × [0, ∞)); moreover, using the convention described in
point (2) of Section 2, we have
Therefore −D y F(0,0) is invertible and sign preserving, and we can apply Theorem 3.4 to
F at (0,0) obtaining an implicit function for the inequality F ≤ 0.
Proof of Theorem 3.1. Let f (x, y) = (F(x, y),x) for (x, y) ∈ Br (ξ0 ) where ξ0 = (x0 , y0 ) ∈
Rm+n . We have
∂ f (x, y) = [ ∂x F(x, y), ∂ y F(x, y); Im , O ] ∀(x, y) ∈ Br (ξ0 ), (5.4)
where O denotes a null matrix of order m × n. Let 0 < ρ < r1 , then by (3.1) we have the
inclusion
co{ ∂ y F(Bρ (x0 ) × Bρ (y0 )) } ⊂ S(λ,n). (5.5)
By (5.4), if H ∈ co{∂ f (Bρ (ξ0 ))}, then there exist Q ∈ co{∂x F(Bρ (x0 ) × Bρ (y0 ))}, P ∈
co{∂ y F(Bρ (x0 ) × Bρ (y0 ))} such that
H = [ Q, P; Im , O ], (5.6)
and by the invertibility of P we have
H −1 = [ O, Im ; P −1 , −P −1 · Q ]. (5.7)
The assumption (iii) yields ‖Q‖ ≤ L. Hence, using (5.7) and the definition of S(λ,n) in
(2.4), we easily get
‖H −1 ‖ ≤ max(1,(1 + L)λ) = ℓ. (5.8)
Let x ∈ Bρ/(2ℓ) (x0 ); then by (5.9) we find (x′ , y) ∈ Bρ (x0 , y0 ) such that f (x′ , y) = (0,x). By
the definition of f , x′ = x, and we set g(x) = y ∈ Bρ (y0 ). We prove that g is well defined.
If there exist y1 , y2 ∈ Bρ (y0 ) such that f (x, y1 ) = f (x, y2 ) = (0,x), then by Theorem 2.4
we have
0 = F(x, y1 ) − F(x, y2 ) ∈ co{ ∂ y F(x,[y1 , y2 ]) } · (y1 − y2 ). (5.10)
Since ρ/(2ℓ) < ρ < r1 and using (5.5), we obtain y1 = y2 . By construction and (i), we get
g(x0 ) = y0 and the equation (3.2) in Bρ/(2ℓ) (x0 ). We show that g is Lipschitz continuous. Let
x1 ,x2 ∈ Bρ/(2ℓ) (x0 ) and g1 = g(x1 ), g2 = g(x2 ); then by (3.2) and the mean value theorem,
we have
0 = F(x1 ,g1 ) − F(x2 ,g2 ) = [F(x1 ,g1 ) − F(x1 ,g2 )] + [F(x1 ,g2 ) − F(x2 ,g2 )]
= P · (g1 − g2 ) + Q · (x1 − x2 ) (5.11)
for some P ∈ co{∂ y F(x1 ,[g1 ,g2 ])} and Q ∈ co{∂x F([x1 ,x2 ],g2 )}. Since [x1 ,x2 ] ⊂ Bρ (x0 )
and [g1 ,g2 ] ⊂ Bρ (y0 ), by (5.5) and (iii), we can write
|P · x| ≥ (1/λ)|x| ∀x ∈ Rn , ‖Q‖ ≤ L. (5.12)
Using (5.12) in (5.11), we get
|g1 − g2 | ≤ λ|P · (g1 − g2 )| = λ|Q · (x1 − x2 )| ≤ λL|x1 − x2 |. (5.13)
For any fixed ρ ∈ (0,r1 ), the relation (5.10) implies the uniqueness of g. The previous
analysis proves that the following set is nonempty:
X = { (g,ρ) : ρ ≤ r1 , g ∈ C(Bρ/(2ℓ) (x0 ); Bρ (y0 )), g(x0 ) = y0 , g satisfies (3.2)
and (3.3) in Bρ/(2ℓ) (x0 ) }. (5.14)
and h extends g. By a standard argument based on Zorn's lemma, we deduce that X
admits a maximal element with respect to the relation ≤X . Let (ḡ, ρ̄) ∈ X be such an
element; then ρ̄ = r1 . Otherwise, for a fixed ρ̄ < ρ1 < r1 , we can repeat the previous con-
struction to find a function g1 ∈ C(Bρ1 /(2ℓ) (x0 ); Bρ1 (y0 )) satisfying g1 (x0 ) = y0 , (3.2), and
(3.3) over Bρ1 /(2ℓ) (x0 ). Since (g1 ,ρ1 ) ∈ X and g1 is unique on its domain of definition, we
have (ḡ, ρ̄) ≤X (g1 ,ρ1 ). Since ρ̄ < ρ1 , this yields a contradiction, proving that ρ̄ = r1 . Hence
the maximal element of X is the implicit function we are seeking. This concludes the
proof.
Proof of Proposition 5.1. Applying the mean value Theorem 2.4, we have
f (ξ0 + h) − f (ξ0 ) ∈ co{ ∂ f ({ξ0 + th : t ∈ [0,1]}) } · h ∀|h| < r, (5.16)
and by the hypotheses, for every S ∈ co{∂ f (Br (ξ0 ))}, we have |Sx| ≥ ℓ−1 |x| for any x ∈
Rn ; therefore (5.2) follows. To prove the inclusion (5.3), consider r > ρ > 0 and η ∈
Bρ/(2ℓ) (η0 ), η0 being f (ξ0 ). Let Φ(ξ) = |η − f (ξ)|2 . This function attains its minimum over
cl Bρ (ξ0 ) at a point ξ̄. In fact ξ̄ ∈ Bρ (ξ0 ): otherwise |ξ̄ − ξ0 | = ρ and by (5.2) we get
|η − f (ξ0 )| ≥ |f (ξ0 ) − f (ξ̄)| − |η − f (ξ̄)| ≥ ρ/ℓ − |η − f (ξ0 )|, (5.17)
implying |η − η0 | ≥ ρ/(2ℓ), which is false of course. Therefore we conclude that ξ̄ lies in the
interior of the ball. Now if η = f (ξ̄), we are done. Otherwise, by the optimality condition
[9], it holds that
0 ∈ 2 (f (ξ̄) − η)T · ∂ f (ξ̄), (5.18)
where (·)T denotes the transpose. Since ∂ f (ξ̄) contains only invertible matrices, (5.18) is
true only when f (ξ̄) − η = 0, and this is a contradiction. Therefore we have proved the
following inclusion:
Bρ/(2ℓ) (η0 ) ⊂ f (Bρ (ξ0 )) (5.19)
for any 0 < ρ < r. By the arbitrary choice of ρ, we obtain the inclusion (5.3).
Proof of Theorem 3.4. Let G(·, ·) = F(·, ·) − F(x0 , y0 ), then we use a fixed-point argument
applied to the following map:
Φ : C(B2 ;B1+ ) −→ C(B2 ;B1+ ),
Φ(v)(x) = y0 + [v(x) − T0 · G(x,v(x)) − y0 ]+ ∀v ∈ C(B2 ;B1+ ), x ∈ B2 , (5.20)
where T0 = (D y F(x0 , y0 ))−1 and [ξ]+ denotes the vector whose entries are the positive
parts of the components of ξ. We consider the supremum norm on C(B2 ;B1+ ). Using
assumptions (3.4) and (3.5), we show that Φ is well defined and is a contraction. Let v,v′ ∈
C(B2 ;B1+ ) and x ∈ B2 ; since the Lipschitz constant of τ ∈ R → [τ]+ is 1 and B1+ is convex, by
(3.5) we have
|Φ(v)(x) − Φ(v′ )(x)| ≤ |v(x) − T0 · G(x,v(x)) − v′ (x) + T0 · G(x,v′ (x))|
≤ sup_{B2 ×B1+ } ‖In − T0 · D y F‖ |v(x) − v′ (x)| (5.21)
≤ (ε/(1 + ε√n)) |v(x) − v′ (x)|.
This yields Φ(v)(x) ∈ B1+ and it proves that Φ is well defined. Since C(B2 ;B1+ ) is a closed
subspace of the Banach space C(B2 ; Rn ), we can apply the Banach fixed-point theorem,
getting a unique function g ∈ C(B2 ;B1+ ) such that
g(x) = y0 + [g(x) − T0 · G(x,g(x)) − y0 ]+ ∀x ∈ B2 . (5.23)
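The contraction Φ in (5.20) can also be iterated numerically. In the hypothetical one-dimensional case F(x, y) = x − y at (x0 , y0 ) = (0, 0) (our example, not the paper's), we have T0 = −1, and −D y F = 1 is invertible and sign preserving; the Picard iteration reaches the fixed point g(x) = max(x, 0), which indeed satisfies F(x, g(x)) ≤ 0:

```python
import numpy as np

# Sketch of the contraction (5.20) for the hypothetical 1-D case
# F(x, y) = x - y at (x0, y0) = (0, 0): here D_y F = -1, T0 = -1,
# and -D_y F = 1 is invertible and sign preserving.
x = np.linspace(-1.0, 1.0, 401)   # grid discretizing the ball B2
T0 = -1.0
G = lambda x, y: x - y            # G = F - F(x0, y0)

def Phi(v):
    # Phi(v)(x) = y0 + [v(x) - T0 * G(x, v(x)) - y0]_+ , with y0 = 0
    return np.maximum(v - T0 * G(x, v), 0.0)

v = np.zeros_like(x)
for _ in range(10):               # Picard iteration toward the fixed point
    v = Phi(v)

# The fixed point is g(x) = max(x, 0), and F(x, g(x)) = min(x, 0) <= 0.
print(np.max(np.abs(v - np.maximum(x, 0.0))))  # essentially 0
print(np.max(G(x, v)))                          # <= 0, so F(x, g(x)) <= 0
```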
References
[1] R. B. Ash, Complex Variables, Academic Press, New York, 1971.
[2] D. P. Bertsekas, Nonlinear Programming, 2nd ed., Athena Scientific, Massachusetts, 1999.
[3] H. C. Chang, W. He, and N. Prabhu, The analytic domain in the implicit function theorem,
JIPAM. J. Inequal. Pure Appl. Math. 4 (2003), no. 1, Article 12, 1–5.
[4] F. H. Clarke, On the inverse function theorem, Pacific J. Math. 64 (1976), no. 1, 97–102.
[5] , Optimization and Nonsmooth Analysis, Canadian Mathematical Society Series of
Monographs and Advanced Texts, John Wiley & Sons, New York, 1983.
[6] A. L. Dontchev, Lipschitzian stability of Newton’s method for variational inclusions, System Mod-
elling and Optimization (Cambridge, 1999), Kluwer Academic Publishers, Massachusetts,
2000, pp. 119–147.
[7] R. Fletcher, Practical Methods of Optimization, 2nd ed., Wiley-Interscience, New York, 2001.
[8] L. Hörmander, An Introduction to Complex Analysis in Several Variables, 2nd revised ed., North-
Holland Mathematical Library, vol. 7, North-Holland Publishing, Amsterdam, 1973.
[9] V. Jeyakumar and D. T. Luc, Nonsmooth calculus, minimality, and monotonicity of convexifica-
tors, J. Optim. Theory Appl. 101 (1999), no. 3, 599–621.
[10] , An open mapping theorem using unbounded generalized Jacobians, Nonlinear Anal. Ser.
A: Theory Methods 50 (2002), no. 5, 647–663.
[11] S. G. Krantz and H. R. Parks, The Implicit Function Theorem. History, Theory, and Applications,
Birkhäuser Boston, Massachusetts, 2002.
[12] Z.-Q. Luo, J.-S. Pang, and D. Ralph, Mathematical Programs with Equilibrium Constraints,
Cambridge University Press, Cambridge, 1996.
[13] R. Mifflin, Semismooth and semiconvex functions in constrained optimization, SIAM J. Control
Optimization 15 (1977), no. 6, 959–972.
[14] B. Mordukhovich, Lipschitzian stability of constraint systems and generalized equations, Nonlin-
ear Anal. 22 (1994), no. 2, 173–206.
[15] , Sensitivity analysis for constraint and variational systems by means of set-valued differ-
entiation, Optimization 31 (1994), no. 1, 13–46.
[16] S. Nash and A. Sofer, Linear and Nonlinear Programming, McGraw-Hill, New York, 1995.
[17] J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research,
Springer-Verlag, New York, 1999.
[18] L. Q. Qi and J. Sun, A nonsmooth version of Newton’s method, Math. Programming 58 (1993),
no. 3, Ser. A, 353–367.
[19] S. M. Robinson, Generalized equations and their solutions. I. Basic theory, Math. Programming
Stud. (1979), no. 10, 128–141.
Marco Papi: Istituto per le Applicazioni del Calcolo “Mauro Picone,” viale del Policlinico 137,
00161 Roma, Italy; Dipartimento di Matematica, Università degli Studi di Roma “Tor Vergata,” via
della Ricerca Scientifica, 00133 Roma, Italy
E-mail addresses: papi@iac.rm.cnr.it; ivnpp@tiscalinet.it