Article

Implicit Extragradient-Like Method for Fixed Point Problems and Variational Inclusion Problems in a Banach Space

Department of Liberal Arts, Gyeongnam National University of Science and Technology, Jinju-Si 52725, Gyeongsangnam-do, Korea
Symmetry 2020, 12(6), 998; https://doi.org/10.3390/sym12060998
Submission received: 7 May 2020 / Revised: 31 May 2020 / Accepted: 3 June 2020 / Published: 11 June 2020
(This article belongs to the Special Issue Symmetry in Nonlinear Functional Analysis and Optimization Theory)

Abstract:
In a uniformly convex and $q$-uniformly smooth Banach space with $q\in(1,2]$, we use VIP to denote a variational inclusion problem involving two accretive mappings, and CFPP to denote the common fixed-point problem of an infinite family of strict pseudocontractions of order $q$. In this paper, we introduce a composite extragradient implicit method for solving a general symmetric system of variational inclusions (GSVI) with VIP and CFPP constraints, and we establish its strong convergence under mild conditions. Finally, we apply the method to the celebrated LASSO problem in Hilbert spaces.

1. Introduction and Preliminaries

Throughout this article, we always assume that $H$ is a real infinite-dimensional Hilbert space endowed with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. Let the nonempty subset $C\subset H$ be convex and closed, and let $P_C$ be the nearest-point (metric) projection of $H$ onto $C$. Let $A: C\to H$ be a nonself mapping. The celebrated variational inequality problem (VIP) is to find an element $x^*\in C$ such that $\langle y-x^*, Ax^*\rangle\ge 0$ for all $y\in C$. We denote by $\mathrm{VI}(C,A)$ the set of all solutions of the VIP. The VIP serves as a unified framework for many industrial and applied problems, such as machine learning, transportation, image processing, and economics; see, e.g., [1,2,3,4,5,6,7]. Korpelevich's extragradient algorithm [8] is one of the most popular methods for solving the VIP numerically. It reads as follows:
$$y_j = P_C(x_j-\ell Ax_j), \qquad x_{j+1} = P_C(x_j-\ell Ay_j), \quad j\ge 0,$$
with step size $\ell\in(0,\frac{1}{L})$, where $L$ is the Lipschitz constant of the mapping $A$. If the solution set is nonempty, the sequence $\{x_j\}$ converges weakly to a solution. The price for this weak convergence is that $A$ must be Lipschitz continuous and monotone. To date, the extragradient method and its relaxed variants have received much attention and have been studied extensively; see, e.g., [9,10,11,12,13], and the sketch after this paragraph. Next, assume that $B$ is a monotone set-valued mapping defined on $H$ and $A$ is a monotone single-valued mapping defined on $H$. The so-called variational inclusion problem is to find an element $x^*\in H$ such that $0\in(A+B)x^*$; it has recently been studied by many authors via splitting-based approximation methods; see, e.g., [14,15,16,17]. This model provides a unified framework for a number of theoretical and practical problems and has been investigated via different methods [18,19,20,21,22]. To the best of the author's knowledge, however, few associated results have been obtained in Banach spaces. We now turn our attention to Banach spaces.
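Before moving on, here is a minimal numerical sketch of Korpelevich's extragradient iteration in $\mathbb{R}^n$ (the operator, the constraint set, and all names below are illustrative choices of mine, not taken from the paper):

```python
import numpy as np

def extragradient(A, proj_C, x0, ell, iters=1000):
    # Korpelevich's method: predictor step y_j, then corrector step x_{j+1}.
    x = x0.copy()
    for _ in range(iters):
        y = proj_C(x - ell * A(x))     # y_j   = P_C(x_j - l*A(x_j))
        x = proj_C(x - ell * A(y))     # x_j+1 = P_C(x_j - l*A(y_j))
    return x

# A(x) = Mx + c with M antisymmetric: monotone (<Mx, x> = 0) and Lipschitz.
rng = np.random.default_rng(0)
K = rng.standard_normal((5, 5))
M = K - K.T
c = rng.standard_normal(5)
A = lambda x: M @ x + c
L = np.linalg.norm(M, 2)                                # Lipschitz constant of A

proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))   # P_C for the unit ball
sol = extragradient(A, proj_ball, np.zeros(5), ell=0.9 / L)
```

The step size `0.9 / L` respects the admissible range $(0,\frac{1}{L})$; a plain projected-gradient step would fail for such skew operators, which is exactly what the extra (predictor) projection repairs.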
In what follows, $E^*$ denotes the dual space of a Banach space $E$. Let $C\subset E$ be a convex and closed set. Given a nonlinear mapping $T$ on $C$, we use $\mathrm{Fix}(T)$ to denote the set of all fixed points of $T$. Recall that $T$ is said to be Lipschitz if and only if $\|Tx-Ty\|\le L\|x-y\|$ for all $x,y\in C$. If the Lipschitz constant $L$ equals one, $T$ is said to be nonexpansive, and its complementary mapping $I-T$ is monotone.
Recall that the duality mapping $J_q: E\to 2^{E^*}$ is defined by
$$J_q(x)=\{\phi\in E^*:\langle x,\phi\rangle=\|x\|^q,\ \|\phi\|=\|x\|^{q-1}\}.$$
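As a quick orientation (a standard fact I am adding, not in the original): in a Hilbert space one may take $\phi=\|x\|^{q-2}x$, and $J_q$ is single-valued:
$$J_q(x)=\{\|x\|^{q-2}x\},\qquad \langle x,\|x\|^{q-2}x\rangle=\|x\|^{q},\qquad \big\|\|x\|^{q-2}x\big\|=\|x\|^{q-1}.$$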
In the particular case $q=2$, we write $J$ for $J_2$, which is commonly called the normalized duality mapping; in a Hilbert space, $J$ is the identity. Recall that a mapping $T$ defined on $C$ is said to be a strict pseudocontraction of order $q$ if, for all $x,y\in C$, there is $j_q(x-y)\in J_q(x-y)$ such that
$$\langle Tx-Ty, j_q(x-y)\rangle \le \|x-y\|^q-\varsigma\|(I-T)x-(I-T)y\|^q$$
for some $\varsigma>0$.
The modulus of convexity of the space $E$, $\delta_E:(0,2]\to[0,1]$, is defined by
$$\delta_E(\epsilon)=\inf\Big\{\frac{2-\|x+\bar{x}\|}{2}: x,\bar{x}\in E,\ \epsilon\le\|x-\bar{x}\|,\ \|x\|=1=\|\bar{x}\|\Big\}.$$
The modulus of smoothness of the space $E$, $\rho_E:[0,\infty)\to[0,\infty)$, is defined by
$$\rho_E(\tau)=\sup\Big\{\frac{\|\tau x+\bar{x}\|+\|\tau x-\bar{x}\|-2}{2}: x,\bar{x}\in E,\ \|x\|=\|\bar{x}\|=1\Big\}.$$
Recall that a space $E$ is said to be uniformly convex if $\delta_E(\epsilon)>0$ for all $\epsilon\in(0,2]$, and uniformly smooth if $\lim_{\tau\to 0^+}\rho_E(\tau)/\tau=0$. Further, $E$ is said to be $q$-uniformly smooth ($q>1$) if there exists $c>0$ such that $\rho_E(s)\le c\,s^q$ for all $s>0$. In $q$-uniformly smooth spaces ($q>1$) one has the following celebrated inequality: for all $x,y\in E$ and some $j_q(x+y)\in J_q(x+y)$,
$$\|x+y\|^q \le \|x\|^q+q\langle y, j_q(x+y)\rangle. \qquad (1)$$
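For orientation (a worked special case I am adding, not in the original): in a Hilbert space, $q=2$ and $j_2(x+y)=x+y$, so (1) reads
$$\|x+y\|^2\le\|x\|^2+2\langle y,x+y\rangle,$$
which is immediate from the identity $\|x\|^2+2\langle y,x+y\rangle=\|x+y\|^2+\|y\|^2$.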
An operator $\Pi: C\to D$, where $D$ is a convex and closed subset of $C$, is said to be sunny if, whenever $(1-\xi)\Pi(x)+\xi x\in C$ for $\xi\ge 0$ and $x\in C$, one has $\Pi(x)=\Pi[\Pi(x)+\xi(x-\Pi(x))]$. If $\Pi$ is both nonexpansive and sunny, then $\langle\bar{x}-x, J(\Pi(\bar{x})-\Pi(x))\rangle\ge\|\Pi(\bar{x})-\Pi(x)\|^2$ for all $\bar{x},x\in C$.
In the setting of $q$-uniformly smooth spaces, recall that an operator $B$ is said to be accretive if, for some $j_q(x-\bar{x})\in J_q(x-\bar{x})$, $\langle u-v, j_q(x-\bar{x})\rangle\ge 0$ for all $u\in Bx$, $v\in B\bar{x}$. An accretive operator $B$ is said to be inverse-strongly accretive of order $q$ if, for each $x,\bar{x}\in C$, there exists $j_q(x-\bar{x})\in J_q(x-\bar{x})$ such that $\langle u-v, j_q(x-\bar{x})\rangle\ge\alpha\|u-v\|^q$ for all $u\in Bx$, $v\in B\bar{x}$ and some $\alpha>0$. An accretive operator $B$ is said to be $m$-accretive if $(I+\lambda B)C=E$ for all $\lambda>0$. In this case, one can define a mapping $J_\lambda^B:(I+\lambda B)C\to C$ by $J_\lambda^B=(I+\lambda B)^{-1}$ for each $\lambda>0$; such $J_\lambda^B$ is called the resolvent of $B$ for $\lambda>0$. In the sequel, we use the notation $T_\lambda:=J_\lambda^B(I-\lambda A)=(I+\lambda B)^{-1}(I-\lambda A)$ for $\lambda>0$. From [9], we have $\mathrm{Fix}(T_\lambda)=(A+B)^{-1}0$ for all $\lambda>0$, and $\|y-T_\lambda y\|\le 2\|y-T_r y\|$ for $0<\lambda\le r$ and $y\in C$. From [23], one has $J_\lambda^B x=J_\mu^B\big(\frac{\mu}{\lambda}x+(1-\frac{\mu}{\lambda})J_\lambda^B x\big)$ for all $\lambda,\mu>0$, $x\in E$, and $\mathrm{Fix}(J_\lambda^B)=B^{-1}0$, where $B^{-1}0=\{x\in C: 0\in Bx\}$.
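To make the operator $T_\lambda$ concrete, here is a minimal finite-dimensional sketch (my own illustration; the box constraint, the quadratic, and all names are assumptions): when $B$ is the normal-cone operator of a closed convex set, the resolvent $J_\lambda^B$ reduces to the metric projection, so $T_\lambda$ is one forward-backward step.

```python
import numpy as np

# Resolvent of the normal-cone operator B of a box C = [lo, hi]^n:
# J_lam^B is the metric projection onto C, for every lam > 0.
def resolvent_box(x, lam, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def T(x, lam, A, resolvent):
    # T_lam = J_lam^B (I - lam*A): forward step on A, backward step on B.
    return resolvent(x - lam * A(x), lam)

# A = grad of 0.5*||Mx - c||^2 is inverse-strongly monotone (cocoercive).
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10)); c = rng.standard_normal(20)
A = lambda x: M.T @ (M @ x - c)
lam = 1.0 / np.linalg.norm(M.T @ M, 2)   # step inside the admissible range

x = np.zeros(10)
for _ in range(300):
    x = T(x, lam, A, resolvent_box)      # iterates approach a zero of A + B
```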
Let $A_1,A_2: C\to E$ and $B_1,B_2: C\to 2^E$ be nonlinear mappings with $B_kx\ne\emptyset$ for all $x\in C$, $k=1,2$. Consider the general symmetric system of variational inclusions (GSVI), which consists of finding a pair $(x^*,y^*)\in C\times C$ such that
$$0\in\zeta_1(A_1y^*+B_1x^*)+x^*-y^*, \qquad 0\in\zeta_2(A_2x^*+B_2y^*)+y^*-x^*, \qquad (2)$$
where $\zeta_k$ is a positive constant for $k=1,2$. Ceng, Postolache and Yao [24] showed that problem (2) is equivalent to a fixed point problem.
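Spelling out the equivalence in the notation above: the first inclusion in (2) says $y^*-\zeta_1A_1y^*\in x^*+\zeta_1B_1x^*$, and the second says $x^*-\zeta_2A_2x^*\in y^*+\zeta_2B_2y^*$, that is,
$$x^*=J^{B_1}_{\zeta_1}(y^*-\zeta_1A_1y^*),\qquad y^*=J^{B_2}_{\zeta_2}(x^*-\zeta_2A_2x^*),$$
so $x^*$ is exactly a fixed point of the composite mapping $G:=J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(I-\zeta_2A_2)$ used throughout the paper.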
Based on this equivalence, Ceng et al. [24] suggested a composite viscosity implicit rule for solving the GSVI (2):
$$\begin{aligned} y_j &= J^{B_2}_{\zeta_2}(x_j-\zeta_2A_2x_j),\\ x_j &= \alpha_jf(x_{j-1})+\delta_jx_{j-1}+\beta_jVx_{j-1}+\gamma_j\big[\mu Sx_j+(1-\mu)J^{B_1}_{\zeta_1}(y_j-\zeta_1A_1y_j)\big], \quad j\ge 1, \end{aligned}$$
where $\mu\in(0,1)$, $S:=(1-\alpha)I+\alpha T$ with $0<\alpha<\min\{1,\frac{2\lambda}{\kappa^2}\}$, and the sequences $\{\alpha_j\},\{\delta_j\},\{\beta_j\},\{\gamma_j\}\subset(0,1)$ are such that (i) $\alpha_j+\delta_j+\beta_j+\gamma_j=1$ for all $j\ge 1$; (ii) $\lim_{j\to\infty}\alpha_j=0$ and $\lim_{j\to\infty}\beta_j/\alpha_j=0$; (iii) $\lim_{j\to\infty}\gamma_j=1$; (iv) $\sum_{j=0}^\infty\alpha_j=\infty$. They proved that $\{x_j\}$ converges strongly to a point of $\mathrm{Fix}(G)\cap\mathrm{Fix}(T)$, which solves a certain VIP.
In this article, we introduce and investigate a composite implicit method for solving the GSVI (2) with VIP and CFPP constraints. We then analyze the convergence of the suggested method in the setting of uniformly convex and $q$-uniformly smooth Banach spaces under some mild conditions. An application is also considered.
From now on, we use $\kappa_q>0$ to denote the $q$-uniform smoothness coefficient; see [25,26]. We also list some essential lemmas for the strong convergence theorem.
Lemma 1
([27]). Let $E$ be $q$-uniformly smooth with $q\in(1,2]$, and $C\subset E$ a closed convex set. Let $T: C\to C$ be a $\varsigma$-strict pseudocontraction of order $q$. Given $\alpha\in(0,1)$, define a single-valued mapping $T_\alpha: C\to C$ by $T_\alpha x=(1-\alpha)x+\alpha Tx$. Then $T_\alpha$ is nonexpansive with $\mathrm{Fix}(T_\alpha)=\mathrm{Fix}(T)$, provided $0<\alpha\le(\frac{\varsigma q}{\kappa_q})^{\frac{1}{q-1}}$.
Lemma 2
([28]). Let $E$ be $q$-uniformly smooth with $q\in(1,2]$. Suppose that $A: C\to E$ is an $\alpha$-inverse-strongly accretive mapping of order $q$. Then, for any given $\lambda\ge 0$,
$$\lambda(\alpha q-\kappa_q\lambda^{q-1})\|Ax-Ay\|^q+\|(I-\lambda A)x-(I-\lambda A)y\|^q \le \|x-y\|^q, \quad x,y\in C.$$
Lemma 3
([26,28]). Let $q\in(1,2]$ be a fixed real number and let $E$ be a $q$-uniformly smooth Banach space. Then
$$\|x+y\|^q \le \|x\|^q+q\langle y, J_q(x)\rangle+\kappa_q\|y\|^q, \quad x,y\in E.$$
Let $B_1,B_2: C\to 2^E$ be two $m$-accretive operators, and let $A_i: C\to E$ be a $\sigma_i$-inverse-strongly accretive mapping of order $q$ for each $i=1,2$. Define an operator $G: C\to C$ by
$$G:=J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(I-\zeta_2A_2).$$
If $0\le\zeta_i\le(\frac{\sigma_iq}{\kappa_q})^{\frac{1}{q-1}}$ for $i=1,2$, then $G$ is a nonexpansive mapping.
The following lemma was proved in [29]; it is a simple extension of an inequality established in [26].
Lemma 4.
In a uniformly convex real Banach space $E$, there is a convex, strictly increasing, continuous function $g:[0,+\infty)\to[0,+\infty)$ with $g(0)=0$ such that
$$\|\lambda x+\mu\bar{x}+\nu\tilde{x}\|^q \le \lambda\|x\|^q+\mu\|\bar{x}\|^q+\nu\|\tilde{x}\|^q-W_q\,g(\|\bar{x}-\tilde{x}\|),$$
where $q>1$ is any real number and $W_q$ is a real function associated with $\mu$ and $\nu$, for all $x,\bar{x},\tilde{x}\in\{x\in E:\|x\|\le r\}$ ($r$ is some positive number) and $\lambda,\mu,\nu\in[0,1]$ with $\lambda+\mu+\nu=1$.
Lemma 5
([26]). If $E$ is a uniformly convex Banach space, then there exists a convex, strictly increasing, continuous function $h:[0,+\infty)\to[0,+\infty)$ with $h(0)=0$ such that
$$h(\|x-y\|) \le \|x\|^q-q\langle x, j_q(y)\rangle+(q-1)\|y\|^q,$$
where $x$ and $y$ lie in some bounded subset of $E$ and $j_q(y)\in J_q(y)$.
Lemma 6
([30]). Let $\{S_n\}_{n=0}^\infty$ be a sequence of self-mappings on $C$ such that
$$\sum_{n=1}^\infty\sup_{x\in C}\|S_nx-S_{n-1}x\|<\infty.$$
Then, for each $y\in C$, $\{S_ny\}$ converges strongly to some point of $C$. Moreover, if $S$ is the self-mapping on $C$ defined by $Sy=\lim_{n\to\infty}S_ny$ for all $y\in C$, then $\lim_{n\to\infty}\sup_{x\in C}\|S_nx-Sx\|=0$.
Lemma 7
([31]). Let $E$ be a strictly convex Banach space, and let $T_n$ be a nonexpansive mapping defined on a convex and closed subset $C$ of $E$ for each $n\ge 0$. Suppose $\bigcap_{n=0}^\infty\mathrm{Fix}(T_n)$ is nonempty. Let $\{\lambda_n\}$ be a sequence of positive numbers with $\sum_{n=0}^\infty\lambda_n=1$. Then the mapping $S$ on $C$ defined by $Sx=\sum_{n=0}^\infty\lambda_nT_nx$ for $x\in C$ is well defined and nonexpansive, and $\mathrm{Fix}(S)=\bigcap_{n=0}^\infty\mathrm{Fix}(T_n)$ holds.
Lemma 8
([32]). Let $E$ be a smooth Banach space, and let $T: C\to C$ be a nonexpansive self-mapping, where $C$ is a convex and closed set in $E$. Suppose that $\lambda$ is a constant in the interval $(0,1)$. Then $\{z_\lambda\}$, where $z_\lambda=(1-\lambda)Tz_\lambda+\lambda u$, converges strongly as $\lambda\to 0^+$ to a fixed point $x^*\in\mathrm{Fix}(T)$, which solves $\langle u-x^*, J(x^*-x)\rangle\ge 0$ for all $x\in\mathrm{Fix}(T)$.
Lemma 9
([33]). Let $\{a_n\}$ be a sequence in $[0,\infty)$ such that $a_{n+1}\le(1-s_n)a_n+s_n\nu_n$ for all $n\ge 0$, where $\{s_n\}$ and $\{\nu_n\}$ satisfy: (i) $\{s_n\}\subset[0,1]$ and $\sum_{n=0}^\infty s_n=\infty$; (ii) $\limsup_{n\to\infty}\nu_n\le 0$ or $\sum_{n=0}^\infty|s_n\nu_n|<\infty$. Then $\lim_{n\to\infty}a_n=0$.
Lemma 10
([34]). Let $\{\Gamma_n\}$ be a real sequence that admits a subsequence $\{\Gamma_{n_i}\}$ such that $\Gamma_{n_i+1}>\Gamma_{n_i}$ for each integer $i\ge 1$. Define the integer sequence $\{\tau(n)\}_{n\ge n_0}$ by $\tau(n)=\max\{k\le n:\Gamma_{k+1}>\Gamma_k\}$, where the integer $n_0\ge 1$ is chosen such that $\{k\le n_0:\Gamma_{k+1}>\Gamma_k\}$ is nonempty. Then: (i) $\Gamma_{\tau(n)}\le\Gamma_{\tau(n)+1}$ and $\Gamma_n\le\Gamma_{\tau(n)+1}$ for each $n\ge n_0$; (ii) $\tau(n_0)\le\tau(n_0+1)\le\cdots$ and $\tau(n)\to\infty$.

2. Results

Throughout this section, suppose that $C$ is a convex closed set in a Banach space $E$ that is both uniformly convex and $q$-uniformly smooth with $q\in(1,2]$. Let $B_1: C\to 2^E$ and $B_2: C\to 2^E$ be $m$-accretive operators. Let $A_k: C\to E$ be a single-valued $\sigma_k$-inverse-strongly accretive mapping of order $q$ for each $k=1,2$. Further, let $G: C\to C$ be the self-mapping defined by $G:=J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(I-\zeta_2A_2)$ with constants $\zeta_1,\zeta_2>0$. Let $S_n$ be a $\varsigma$-uniformly strict pseudocontraction of order $q$ for each $n\ge 0$. Let $A: C\to E$ and $B: C\to 2^E$ be a $\sigma$-inverse-strongly accretive mapping of order $q$ and an $m$-accretive operator, respectively. Assume that the feasibility set $\Omega:=\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap(A+B)^{-1}0\cap\mathrm{Fix}(G)$ is nonempty.
Algorithm 1: Composite extragradient implicit method for the GSVI (2) with VIP and CFPP constraints.
Initial Step. Given $\xi\in(0,1)$ and $\alpha\in(0,\min\{1,(\frac{\varsigma q}{\kappa_q})^{\frac{1}{q-1}}\})$, let $x_0\in C$ be an arbitrary initial guess.
Iteration Steps. Compute $x_{n+1}$ from the current $x_n$ as follows:
 Step 1. Calculate
$$w_n = s_n((1-\xi)x_n+\xi Gw_n)+(1-s_n)((1-\alpha)x_n+\alpha S_nx_n), \qquad u_n = J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(w_n-\zeta_2A_2w_n);$$
 Step 2. Calculate $y_n=J^{B}_{\lambda_n}(u_n-\lambda_nAu_n)$;
 Step 3. Calculate $z_n=J^{B}_{\lambda_n}(u_n-\lambda_nAy_n+r_n(y_n-u_n))$;
 Step 4. Calculate $x_{n+1}=\alpha_nu+\beta_nu_n+\gamma_n((1-\alpha)z_n+\alpha S_nz_n)$, where $u$ is a fixed element in $C$, and $\{r_n\},\{s_n\},\{\alpha_n\},\{\beta_n\},\{\gamma_n\}\subset(0,1]$, $\{\lambda_n\}\subset(0,\infty)$ with $\alpha_n+\beta_n+\gamma_n=1$.
 Set $n:=n+1$ and go to Step 1.
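The following is one possible $\mathbb{R}^n$ realization of Algorithm 1 (a sketch under my own assumptions: the resolvents are supplied as proximal-type maps, and $G$, $S_n$, the operators and the parameter schedule are illustrative, not the paper's). The implicit Step 1 is resolved by an inner fixed-point loop, which converges geometrically because $w\mapsto s_n\xi\,G(w)+(\text{terms independent of }w)$ is a contraction with modulus $s_n\xi<1$ when $G$ is nonexpansive.

```python
import numpy as np

def algorithm1(x0, u, G, S, prox_B, A, params, iters=200, inner=60):
    """Sketch of the composite extragradient implicit method in R^n.

    G: plays J^{B1}(I - z1*A1) J^{B2}(I - z2*A2); S(n, x): plays S_n;
    prox_B(v, lam): plays the resolvent J_lam^B; A: single-valued operator.
    params(n) -> (s, xi, alpha, r, lam, a_n, b_n, g_n), a_n + b_n + g_n = 1.
    """
    x = np.asarray(x0, dtype=float).copy()
    for n in range(iters):
        s, xi, alpha, r, lam, a_n, b_n, g_n = params(n)
        # Step 1 (implicit): w = s((1-xi)x + xi*G(w)) + (1-s)((1-alpha)x + alpha*S_n x).
        base = s * (1 - xi) * x + (1 - s) * ((1 - alpha) * x + alpha * S(n, x))
        w = x.copy()
        for _ in range(inner):           # inner fixed-point loop, modulus s*xi < 1
            w = base + s * xi * G(w)
        u_n = G(w)
        y = prox_B(u_n - lam * A(u_n), lam)                   # Step 2
        z = prox_B(u_n - lam * A(y) + r * (y - u_n), lam)     # Step 3
        x = a_n * u + b_n * u_n + g_n * ((1 - alpha) * z + alpha * S(n, z))  # Step 4
    return x

# Toy instance: G = I, S_n = I (no GSVI/CFPP constraints), A + B of LASSO type.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 20)); c = rng.standard_normal(30)
A = lambda v: M.T @ (M @ v - c)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0.0)
sig = 1.0 / np.linalg.norm(M.T @ M, 2)
params = lambda n: (0.5, 0.5, 0.5, 0.9, 0.9 * sig,
                    1.0 / (n + 2), 0.5 * (1 - 1.0 / (n + 2)), 0.5 * (1 - 1.0 / (n + 2)))
x_star = algorithm1(np.zeros(20), np.zeros(20), lambda w: w, lambda n, x: x, soft, A, params)
```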
Lemma 11.
Let the sequence $\{x_n\}$ be generated by Algorithm 1. Then $\{x_n\}$ is bounded.
Proof. 
Put $v_n=J^{B_2}_{\zeta_2}(w_n-\zeta_2A_2w_n)$ and $S_{\alpha,n}:=(1-\alpha)I+\alpha S_n$ for $n\ge 0$. By Lemma 1, each $S_{\alpha,n}: C\to C$ is nonexpansive with $\mathrm{Fix}(S_{\alpha,n})=\mathrm{Fix}(S_n)$. Let $p\in\Omega:=\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{Fix}(G)\cap(A+B)^{-1}0$. Then we observe that $p=Gp=S_np=J^{B}_{\lambda_n}\big((1-r_n)p+r_n(p-\frac{\lambda_n}{r_n}Ap)\big)$. By Lemmas 2 and 3, $I-\zeta_1A_1$, $I-\zeta_2A_2$ and $G:=J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(I-\zeta_2A_2)$ are nonexpansive mappings. It is clear that there is exactly one element $w_n\in C$ satisfying
$$w_n = s_n((1-\xi)x_n+\xi Gw_n)+(1-s_n)(\alpha S_nx_n+(1-\alpha)x_n).$$
Since G and S α , n are both nonexpansive mappings, we get
$$\|w_n-p\| \le s_n\big((1-\xi)\|x_n-p\|+\xi\|Gw_n-p\|\big)+(1-s_n)\|S_{\alpha,n}x_n-p\| \le s_n\big(\xi\|w_n-p\|+(1-\xi)\|x_n-p\|\big)+(1-s_n)\|x_n-p\|,$$
and hence $\|w_n-p\|\le\|x_n-p\|$ for all $n\ge 0$. Using the nonexpansivity of $G$ again, we deduce from $u_n=Gw_n$ that $\|u_n-p\|\le\|w_n-p\|\le\|x_n-p\|$. By Lemmas 2 and 4, we have
$$\|y_n-p\|^q \le \|(I-\lambda_nA)u_n-(I-\lambda_nA)p\|^q \le \|u_n-p\|^q-\lambda_n(\sigma q-\kappa_q\lambda_n^{q-1})\|Au_n-Ap\|^q,$$
which leads to $\|y_n-p\|\le\|u_n-p\|$. On the other hand,
$$\begin{aligned}\|z_n-p\|^q &= \big\|J^{B}_{\lambda_n}\big((1-r_n)u_n+r_n(y_n-\tfrac{\lambda_n}{r_n}Ay_n)\big)-J^{B}_{\lambda_n}\big((1-r_n)p+r_n(p-\tfrac{\lambda_n}{r_n}Ap)\big)\big\|^q\\ &\le r_n\big\|(I-\tfrac{\lambda_n}{r_n}A)y_n-(I-\tfrac{\lambda_n}{r_n}A)p\big\|^q+(1-r_n)\|u_n-p\|^q\\ &\le r_n\big[\|y_n-p\|^q-\tfrac{\lambda_n}{r_n}\big(\sigma q-\kappa_q\tfrac{\lambda_n^{q-1}}{r_n^{q-1}}\big)\|Ay_n-Ap\|^q\big]+(1-r_n)\|u_n-p\|^q\\ &\le \|u_n-p\|^q+\lambda_n\big(\kappa_q\tfrac{\lambda_n^{q-1}}{r_n^{q-1}}-\sigma q\big)\|Ay_n-Ap\|^q+r_n\lambda_n(\kappa_q\lambda_n^{q-1}-\sigma q)\|Au_n-Ap\|^q.\end{aligned}$$
This ensures that $\|z_n-p\|\le\|u_n-p\|$. It then follows from Step 4 of Algorithm 1 that
$$\|x_{n+1}-p\| \le \alpha_n\|p-u\|+\beta_n\|u_n-p\|+\gamma_n\|S_{\alpha,n}z_n-p\| \le \alpha_n\|p-u\|+(1-\alpha_n)\|x_n-p\|,$$
which, by induction, leads to $\|x_n-p\|\le\max\{\|p-u\|,\|p-x_0\|\}$. □
Theorem 1.
Suppose that $\{x_n\}$ is the sequence generated by Algorithm 1. Assume that the parameter sequences satisfy: $\sum_{n=0}^\infty\alpha_n=\infty$ and $\lim_{n\to\infty}\alpha_n=0$; $0<a\le\beta_n\le b<1$ and $0<c\le s_n\le d<1$; $0<r\le r_n<1$ and $0<\lambda\le\lambda_n<\frac{\lambda_n}{r_n}\le\mu<(\frac{\sigma q}{\kappa_q})^{\frac{1}{q-1}}$; $0<\zeta_i<(\frac{\sigma_iq}{\kappa_q})^{\frac{1}{q-1}}$ for $i=1,2$. Suppose $\sum_{n=0}^\infty\sup_{x\in D}\|S_{n+1}x-S_nx\|<\infty$ for every bounded subset $D$ of $C$, and define the mapping $S$ by $Sx=\lim_{n\to\infty}S_nx$ for all $x\in C$. Then the sequence $\{x_n\}$ converges strongly to $x^*\in\Omega$, which uniquely solves the VIP $\langle x^*-u, J(x^*-\bar{x})\rangle\le 0$ for all $\bar{x}\in\Omega$.
Proof. 
First, set $v_n=J^{B_2}_{\zeta_2}(w_n-\zeta_2A_2w_n)$, fix $x^*\in\Omega$, and put $y^*=J^{B_2}_{\zeta_2}(x^*-\zeta_2A_2x^*)$. Since $u_n=J^{B_1}_{\zeta_1}(I-\zeta_1A_1)v_n$ and $v_n=J^{B_2}_{\zeta_2}(I-\zeta_2A_2)w_n$, we have $u_n=Gw_n$. Lemma 2 yields
$$\|v_n-y^*\|^q \le \|w_n-x^*\|^q+\zeta_2(\kappa_q\zeta_2^{q-1}-\sigma_2q)\|A_2w_n-A_2x^*\|^q$$
and
$$\|u_n-x^*\|^q \le \|v_n-y^*\|^q+\zeta_1(\kappa_q\zeta_1^{q-1}-\sigma_1q)\|A_1v_n-A_1y^*\|^q.$$
Combining the last two inequalities, we have
$$\|u_n-x^*\|^q \le \|w_n-x^*\|^q+\zeta_2(\kappa_q\zeta_2^{q-1}-\sigma_2q)\|A_2w_n-A_2x^*\|^q+\zeta_1(\kappa_q\zeta_1^{q-1}-\sigma_1q)\|A_1v_n-A_1y^*\|^q.$$
Using (1) and Lemma 4, we obtain that
$$\begin{aligned}\|x_{n+1}-x^*\|^q &\le q\alpha_n\langle u-x^*, J_q(x_{n+1}-x^*)\rangle+\|\beta_n(u_n-x^*)+\gamma_n(S_{\alpha,n}z_n-x^*)\|^q\\ &\le \beta_n\|x^*-u_n\|^q+\gamma_n\big[\|u_n-x^*\|^q-r_n\lambda_n(\sigma q-\kappa_q\lambda_n^{q-1})\|Au_n-Ax^*\|^q\\ &\quad-\lambda_n\big(\sigma q-\kappa_q\tfrac{\lambda_n^{q-1}}{r_n^{q-1}}\big)\|Ay_n-Ax^*\|^q\big]-W_q\,g(\|u_n-S_{\alpha,n}z_n\|)+q\alpha_n\langle u-x^*, J_q(x_{n+1}-x^*)\rangle\\ &\le (1-\alpha_n)\|x_n-x^*\|^q-\gamma_n\big[\zeta_2(\sigma_2q-\kappa_q\zeta_2^{q-1})\|A_2w_n-A_2x^*\|^q+\zeta_1(\sigma_1q-\kappa_q\zeta_1^{q-1})\|A_1v_n-A_1y^*\|^q\\ &\quad+r_n\lambda_n(\sigma q-\kappa_q\lambda_n^{q-1})\|Au_n-Ax^*\|^q+\lambda_n\big(\sigma q-\kappa_q\tfrac{\lambda_n^{q-1}}{r_n^{q-1}}\big)\|Ax^*-Ay_n\|^q\big]\\ &\quad-W_q\,g(\|u_n-S_{\alpha,n}z_n\|)+\alpha_nq\langle u-x^*, J_q(x_{n+1}-x^*)\rangle. \qquad (3)\end{aligned}$$
Put
$$\begin{aligned}\Gamma_n &= \|x_n-x^*\|^q,\\ \eta_n &= \gamma_n\big[\zeta_2(\sigma_2q-\kappa_q\zeta_2^{q-1})\|A_2w_n-A_2x^*\|^q+\zeta_1(\sigma_1q-\kappa_q\zeta_1^{q-1})\|A_1v_n-A_1y^*\|^q\\ &\quad+\lambda_nr_n(\sigma q-\kappa_q\lambda_n^{q-1})\|Au_n-Ax^*\|^q+\lambda_n\big(\sigma q-\kappa_q\tfrac{\lambda_n^{q-1}}{r_n^{q-1}}\big)\|Ay_n-Ax^*\|^q\big]+W_q\,g(\|u_n-S_{\alpha,n}z_n\|),\\ \delta_n &= q\alpha_n\langle u-x^*, J_q(x_{n+1}-x^*)\rangle.\end{aligned}$$
Then (3) can be rewritten as the following formula:
$$\Gamma_{n+1}+\eta_n \le (1-\alpha_n)\Gamma_n+\delta_n. \qquad (4)$$
We next consider two possible cases.
Case 1. Assume that there is an integer $n_0\ge 1$ such that $\{\Gamma_n\}$ is non-increasing. From (4), we get $\eta_n\le\Gamma_n-\Gamma_{n+1}-\alpha_n\Gamma_n+\delta_n$. Since $\{\Gamma_n\}$ converges and $\lim_{n\to\infty}\alpha_n=\lim_{n\to\infty}\delta_n=0$, one sees that $\lim_{n\to\infty}\eta_n=0$. As every summand of $\eta_n$ is nonnegative, it follows that $\lim_{n\to\infty}g(\|u_n-S_{\alpha,n}z_n\|)=0$,
$$\lim_{n\to\infty}\|A_2w_n-A_2x^*\|=\lim_{n\to\infty}\|A_1v_n-A_1y^*\|=0$$
and
$$\lim_{n\to\infty}\|Au_n-Ax^*\|=\lim_{n\to\infty}\|Ay_n-Ax^*\|=0. \qquad (6)$$
Since $g$ is a strictly increasing, continuous and convex function with $g(0)=0$, one has
$$\lim_{n\to\infty}\|u_n-S_{\alpha,n}z_n\|=0.$$
By use of Lemma 5, we get
$$\begin{aligned}\|v_n-y^*\|^q &\le \langle w_n-\zeta_2A_2w_n-(x^*-\zeta_2A_2x^*), J_q(v_n-y^*)\rangle\\ &\le \frac{1}{q}\big[\|w_n-x^*\|^q+(q-1)\|v_n-y^*\|^q-\tilde{h}_1(\|w_n-x^*-v_n+y^*\|)\big]+\zeta_2\langle A_2x^*-A_2w_n, J_q(v_n-y^*)\rangle,\end{aligned}$$
where $\tilde{h}_1$ is a convex, strictly increasing, continuous function as in Lemma 5. This entails
$$\|v_n-y^*\|^q \le \|w_n-x^*\|^q-\tilde{h}_1(\|w_n-v_n-x^*+y^*\|)+q\zeta_2\|A_2x^*-A_2w_n\|\,\|v_n-y^*\|^{q-1}.$$
In a similar way, one concludes
$$\begin{aligned}\|u_n-x^*\|^q &\le \langle v_n-\zeta_1A_1v_n-(y^*-\zeta_1A_1y^*), J_q(u_n-x^*)\rangle\\ &\le \frac{1}{q}\big[\|v_n-y^*\|^q+(q-1)\|u_n-x^*\|^q-\tilde{h}_2(\|v_n-y^*-u_n+x^*\|)\big]+\zeta_1\langle A_1y^*-A_1v_n, J_q(u_n-x^*)\rangle,\end{aligned}$$
which entails
$$\begin{aligned}\|u_n-x^*\|^q &\le \|v_n-y^*\|^q-\tilde{h}_2(\|v_n-y^*-u_n+x^*\|)+q\zeta_1\|A_1y^*-A_1v_n\|\,\|u_n-x^*\|^{q-1}\\ &\le \|x_n-x^*\|^q-\tilde{h}_1(\|w_n-v_n-x^*+y^*\|)+q\zeta_2\|A_2x^*-A_2w_n\|\,\|v_n-y^*\|^{q-1}\\ &\quad-\tilde{h}_2(\|v_n-u_n+x^*-y^*\|)+q\zeta_1\|A_1y^*-A_1v_n\|\,\|u_n-x^*\|^{q-1}.\end{aligned}$$
Note that
$$\begin{aligned}q\|y_n-x^*\|^q &\le q\langle(u_n-\lambda_nAu_n)-(x^*-\lambda_nAx^*), J_q(y_n-x^*)\rangle\\ &\le \|(u_n-\lambda_nAu_n)-(x^*-\lambda_nAx^*)\|^q-h_1(\|u_n-y_n-\lambda_n(Au_n-Ax^*)\|)+(q-1)\|y_n-x^*\|^q,\end{aligned}$$
which leads us to
$$\|y_n-x^*\|^q \le \|u_n-x^*\|^q-h_1(\|u_n-\lambda_n(Au_n-Ax^*)-y_n\|). \qquad (8)$$
From (8), one has
$$\begin{aligned}\|x_{n+1}-x^*\|^q &\le \alpha_n\|x^*-u\|^q+\beta_n\|u_n-x^*\|^q+\gamma_n\big\{(1-r_n)\|u_n-x^*\|^q+r_n\big[\|u_n-x^*\|^q-h_1(\|u_n-\lambda_n(Au_n-Ax^*)-y_n\|)\big]\big\}\\ &\le \beta_n\|x_n-x^*\|^q+\alpha_n\|x^*-u\|^q+\gamma_n\big\{\|x_n-x^*\|^q-\tilde{h}_1(\|w_n-v_n-x^*+y^*\|)-\tilde{h}_2(\|v_n-u_n+x^*-y^*\|)\\ &\quad+q\zeta_1\|A_1y^*-A_1v_n\|\,\|u_n-x^*\|^{q-1}+q\zeta_2\|A_2x^*-A_2w_n\|\,\|v_n-y^*\|^{q-1}-r_nh_1(\|u_n-\lambda_n(Au_n-Ax^*)-y_n\|)\big\}\\ &\le \|x_n-x^*\|^q+\alpha_n\|x^*-u\|^q-\gamma_n\big\{\tilde{h}_1(\|w_n-v_n-x^*+y^*\|)+\tilde{h}_2(\|v_n-u_n+x^*-y^*\|)\\ &\quad+r_nh_1(\|u_n-\lambda_n(Au_n-Ax^*)-y_n\|)\big\}+q\zeta_1\|A_1y^*-A_1v_n\|\,\|u_n-x^*\|^{q-1}+q\zeta_2\|A_2x^*-A_2w_n\|\,\|v_n-y^*\|^{q-1},\end{aligned}$$
which immediately yields that
$$\begin{aligned}&\gamma_n\big\{\tilde{h}_1(\|w_n-v_n-x^*+y^*\|)+\tilde{h}_2(\|v_n-u_n+x^*-y^*\|)+r_nh_1(\|u_n-\lambda_n(Au_n-Ax^*)-y_n\|)\big\}\\ &\quad\le \alpha_n\|u-x^*\|^q+\Gamma_n-\Gamma_{n+1}+q\zeta_1\|A_1y^*-A_1v_n\|\,\|u_n-x^*\|^{q-1}+q\zeta_2\|A_2x^*-A_2w_n\|\,\|v_n-y^*\|^{q-1}.\end{aligned}$$
Since $\tilde{h}_1(0)=\tilde{h}_2(0)=h_1(0)=0$ and $\tilde{h}_1$, $\tilde{h}_2$, $h_1$ are strictly increasing, continuous and convex functions, one concludes that $\lim_{n\to\infty}\|w_n-v_n-x^*+y^*\|=\lim_{n\to\infty}\|v_n-u_n+x^*-y^*\|=\lim_{n\to\infty}\|u_n-y_n-\lambda_n(Au_n-Ax^*)\|=0$. Together with (6), this immediately implies that
$$\lim_{n\to\infty}\|w_n-u_n\|=\lim_{n\to\infty}\|u_n-y_n\|=0. \qquad (9)$$
Furthermore, using $w_n=s_n((1-\xi)x_n+\xi Gw_n)+(1-s_n)S_{\alpha,n}x_n$ from Step 1 of the iteration, we obtain
$$\begin{aligned}\|w_n-x^*\|^q &= \langle s_n((1-\xi)x_n+\xi Gw_n)+(1-s_n)S_{\alpha,n}x_n-x^*, J_q(w_n-x^*)\rangle\\ &= s_n\big[(1-\xi)\langle x_n-x^*, J_q(w_n-x^*)\rangle+\xi\langle Gw_n-x^*, J_q(w_n-x^*)\rangle\big]+(1-s_n)\langle S_{\alpha,n}x_n-x^*, J_q(w_n-x^*)\rangle\\ &\le s_n\big[(1-\xi)\langle x_n-x^*, J_q(w_n-x^*)\rangle+\xi\|w_n-x^*\|^q\big]+(1-s_n)\langle S_{\alpha,n}x_n-x^*, J_q(w_n-x^*)\rangle,\end{aligned}$$
which yields
$$\begin{aligned}\|w_n-x^*\|^q &\le \frac{1}{1-s_n\xi}\big[s_n(1-\xi)\langle x_n-x^*, J_q(w_n-x^*)\rangle+(1-s_n)\langle S_{\alpha,n}x_n-x^*, J_q(w_n-x^*)\rangle\big]\\ &\le \frac{1}{q}\big[\|x_n-x^*\|^q+(q-1)\|w_n-x^*\|^q\big]-\frac{1}{1-s_n\xi}\Big[\frac{s_n(1-\xi)}{q}h_3(\|x_n-w_n\|)+\frac{1-s_n}{q}\tilde{h}_3(\|S_{\alpha,n}x_n-w_n\|)\Big].\end{aligned}$$
This further implies that
$$\|u_n-x^*\|^q \le \|w_n-x^*\|^q \le \|x_n-x^*\|^q-\frac{1}{1-s_n\xi}\big[s_n(1-\xi)h_3(\|x_n-w_n\|)+(1-s_n)\tilde{h}_3(\|S_{\alpha,n}x_n-w_n\|)\big].$$
In a similar way, one further concludes
$$\begin{aligned}\|z_n-x^*\|^q &\le \|(x^*-\lambda_nAx^*)-(u_n-\lambda_nAy_n+r_n(y_n-u_n))\|^q-h_2(\|u_n+r_n(y_n-u_n)-\lambda_n(Ay_n-Ax^*)-z_n\|)\\ &\le \|u_n-x^*\|^q-h_2(\|u_n+r_n(y_n-u_n)-\lambda_n(Ay_n-Ax^*)-z_n\|). \qquad (10)\end{aligned}$$
Using (10) leads us to
$$\begin{aligned}\|x_{n+1}-x^*\|^q &\le \alpha_n\|x^*-u\|^q+\beta_n\|u_n-x^*\|^q+\gamma_n\big[\|u_n-x^*\|^q-h_2(\|u_n+r_n(y_n-u_n)-\lambda_n(Ay_n-Ax^*)-z_n\|)\big]\\ &\le \|x_n-x^*\|^q+\alpha_n\|x^*-u\|^q-\gamma_n\Big\{\frac{1}{1-s_n\xi}\big[s_n(1-\xi)h_3(\|x_n-w_n\|)+(1-s_n)\tilde{h}_3(\|S_{\alpha,n}x_n-w_n\|)\big]\\ &\quad+h_2(\|u_n+r_n(y_n-u_n)-\lambda_n(Ay_n-Ax^*)-z_n\|)\Big\}.\end{aligned}$$
Hence,
$$\gamma_n\Big\{\frac{1}{1-s_n\xi}\big[s_n(1-\xi)h_3(\|x_n-w_n\|)+(1-s_n)\tilde{h}_3(\|S_{\alpha,n}x_n-w_n\|)\big]+h_2(\|u_n+r_n(y_n-u_n)-\lambda_n(Ay_n-Ax^*)-z_n\|)\Big\} \le \Gamma_n+\alpha_n\|u-x^*\|^q-\Gamma_{n+1}.$$
Note that $h_2(0)=h_3(0)=\tilde{h}_3(0)=0$ and that $h_2$, $h_3$, $\tilde{h}_3$ are strictly increasing, continuous and convex functions. From (6) and (9), we have
$$\lim_{n\to\infty}\|x_n-w_n\|=\lim_{n\to\infty}\|S_{\alpha,n}x_n-w_n\|=\lim_{n\to\infty}\|u_n-z_n\|=0. \qquad (11)$$
By (9) and (11), one concludes $\lim_{n\to\infty}\|x_n-u_n\|=\lim_{n\to\infty}\|x_n-z_n\|=0$. It follows that
$$\|x_n-Gx_n\| \le \|x_n-u_n\|+\|u_n-Gx_n\| \le \|x_n-u_n\|+\|w_n-x_n\| \to 0 \quad (n\to\infty). \qquad (12)$$
Thanks to (11), we get $\lim_{n\to\infty}\|S_{\alpha,n}x_n-x_n\|=0$, which, together with $\|S_{\alpha,n}x_n-x_n\|=\alpha\|S_nx_n-x_n\|$, leads to
$$\|S_nx_n-x_n\| = \frac{1}{\alpha}\|S_{\alpha,n}x_n-x_n\| \to 0 \quad (n\to\infty). \qquad (13)$$
Since $\{x_n\}$ is bounded, setting $D=\overline{\mathrm{conv}}\{x_n: n\ge 0\}$, we have $\sum_{n=1}^\infty\sup_{x\in D}\|S_nx-S_{n-1}x\|<\infty$. Lemma 6 yields $\lim_{n\to\infty}\sup_{x\in D}\|S_nx-Sx\|=0$, so $\lim_{n\to\infty}\|S_nx_n-Sx_n\|=0$. Further, from (13), we have
$$\|x_n-Sx_n\| \le \|x_n-S_nx_n\|+\|S_nx_n-Sx_n\| \to 0 \quad (n\to\infty). \qquad (14)$$
Letting $S_\alpha x:=(1-\alpha)x+\alpha Sx$ for $x\in C$, we deduce from Lemma 1 that $S_\alpha: C\to C$ is a nonexpansive mapping. It is easy to see from (14) that $\lim_{n\to\infty}\|x_n-S_\alpha x_n\|=0$. For each $n\ge 0$, set $T_{\lambda_n}:=J^{B}_{\lambda_n}(I-\lambda_nA)$. It follows that
$$\|x_n-T_{\lambda_n}x_n\| \le \|x_n-u_n\|+\|u_n-T_{\lambda_n}u_n\|+\|T_{\lambda_n}u_n-T_{\lambda_n}x_n\| \le 2\|x_n-u_n\|+\|u_n-y_n\| \to 0 \quad (n\to\infty).$$
In light of $0<\lambda\le\lambda_n$ for all $n\ge 0$, we obtain
$$\|T_\lambda x_n-x_n\| \le 2\|T_{\lambda_n}x_n-x_n\| \to 0 \quad (n\to\infty). \qquad (15)$$
We define a mapping $\Psi: C\to C$ by $\Psi x:=\theta_1S_\alpha x+\theta_2Gx+(1-\theta_1-\theta_2)T_\lambda x$ for $x\in C$, where $\theta_1,\theta_2\in(0,1)$ are constants with $\theta_1+\theta_2<1$. Lemma 7 guarantees that $\Psi$ is nonexpansive and
$$\mathrm{Fix}(\Psi)=\mathrm{Fix}(S_\alpha)\cap\mathrm{Fix}(G)\cap\mathrm{Fix}(T_\lambda)=\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{Fix}(G)\cap(A+B)^{-1}0\;(=\Omega).$$
Taking into account that
$$\|\Psi x_n-x_n\| \le \theta_1\|S_\alpha x_n-x_n\|+\theta_2\|Gx_n-x_n\|+(1-\theta_1-\theta_2)\|T_\lambda x_n-x_n\|,$$
we deduce from (12) and (15) that
$$\lim_{n\to\infty}\|\Psi x_n-x_n\|=0. \qquad (16)$$
Let $z_\lambda=\lambda u+(1-\lambda)\Psi z_\lambda$ for $\lambda\in(0,1)$. Lemma 8 guarantees that $\{z_\lambda\}$ converges in norm, as $\lambda\to 0^+$, to a point $x^*\in\mathrm{Fix}(\Psi)=\Omega$, which solves the VIP $\langle x^*-u, J(x^*-p)\rangle\le 0$ for all $p\in\Omega$. From (1), we have
$$\begin{aligned}\|z_\lambda-x_n\|^q &\le \lambda q\|z_\lambda-x_n\|^q+(1-\lambda)^q(\|\Psi z_\lambda-\Psi x_n\|+\|\Psi x_n-x_n\|)^q+\lambda q\langle u-z_\lambda, J_q(z_\lambda-x_n)\rangle\\ &\le \lambda q\|z_\lambda-x_n\|^q+(1-\lambda)^q(\|\Psi x_n-x_n\|+\|z_\lambda-x_n\|)^q+\lambda q\langle u-z_\lambda, J_q(z_\lambda-x_n)\rangle.\end{aligned}$$
Further, from (16), one has
$$\limsup_{n\to\infty}\langle u-z_t, J_q(x_n-z_t)\rangle \le \frac{M\big(qt-1+(1-t)^q\big)}{qt},$$
where $M$ is a constant such that $\|z_t-x_n\|^q\le M$ for all $n\ge 0$ and $t\in(0,1)$. From the properties of $J_q$ on bounded sets and the fact that $z_t\to x^*$ as $t\to 0$, one gets $\lim_{t\to 0}\|J_q(x_n-x^*)-J_q(x_n-z_t)\|=0$. A simple calculation then indicates that
$$\limsup_{n\to\infty}\langle u-x^*, J_q(x_n-x^*)\rangle \le 0 \qquad (17)$$
Moreover,
$$\|x_{n+1}-x_n\| \le \alpha_n\|u-x_n\|+\beta_n\|u_n-x_n\|+\gamma_n(\|S_{\alpha,n}z_n-u_n\|+\|u_n-x_n\|) \le \alpha_n\|u-x_n\|+\|u_n-x_n\|+\|S_{\alpha,n}z_n-u_n\| \to 0 \quad (n\to\infty).$$
Combining this with (17), we have $\limsup_{n\to\infty}\langle u-x^*, J_q(x_{n+1}-x^*)\rangle\le 0$. An application of Lemma 9 to (4) then yields $\Gamma_n\to 0$ as $n\to\infty$; thus, $x_n\to x^*$ as $n\to\infty$.
Case 2. Assume that there exists a subsequence $\{\Gamma_{k_i}\}\subset\{\Gamma_k\}$ such that $\Gamma_{k_i}<\Gamma_{k_i+1}$ for all $i\in\mathbb{N}$, where $\mathbb{N}$ is the set of all positive integers. Define the mapping $\tau:\mathbb{N}\to\mathbb{N}$ by $\tau(k):=\max\{i\le k:\Gamma_i<\Gamma_{i+1}\}$. Using Lemma 10, one concludes
$$\Gamma_{\tau(k)+1}\ge\Gamma_{\tau(k)} \quad\text{and}\quad \Gamma_{\tau(k)+1}\ge\Gamma_k.$$
Putting $\Gamma_k=\|x_k-x^*\|^q$ for $k\in\mathbb{N}$ and using the same reasoning as in Case 1, we can obtain
$$\lim_{k\to\infty}\|x_{\tau(k)}-x_{\tau(k)+1}\|=0 \quad\text{and}\quad \limsup_{k\to\infty}\langle u-x^*, J_q(x_{\tau(k)+1}-x^*)\rangle \le 0.$$
In view of $\alpha_{\tau(k)}>0$ and $\Gamma_{\tau(k)+1}\ge\Gamma_{\tau(k)}$, we conclude from (4) that
$$\|x_{\tau(k)}-x^*\|^q \le q\langle u-x^*, J_q(x_{\tau(k)+1}-x^*)\rangle.$$
Consequently, $\lim_{k\to\infty}\|x_{\tau(k)}-x^*\|^q=0$. Using Lemma 3, we have
$$\begin{aligned}\|x_{\tau(k)+1}-x^*\|^q &\le \|x_{\tau(k)}-x^*\|^q+q\langle x_{\tau(k)+1}-x_{\tau(k)}, J_q(x_{\tau(k)}-x^*)\rangle+\kappa_q\|x_{\tau(k)+1}-x_{\tau(k)}\|^q\\ &\le \|x_{\tau(k)}-x^*\|^q+q\|x_{\tau(k)}-x^*\|^{q-1}\|x_{\tau(k)+1}-x_{\tau(k)}\|+\kappa_q\|x_{\tau(k)+1}-x_{\tau(k)}\|^q \to 0 \quad (k\to\infty).\end{aligned}$$
Thanks to $\Gamma_k\le\Gamma_{\tau(k)+1}$, we get
$$\|x_k-x^*\|^q \le \|x_{\tau(k)}-x^*\|^q+q\|x_{\tau(k)}-x^*\|^{q-1}\|x_{\tau(k)+1}-x_{\tau(k)}\|+\kappa_q\|x_{\tau(k)+1}-x_{\tau(k)}\|^q. \qquad (18)$$
It is easy to see from (18) that $x_k\to x^*$ as $k\to\infty$. This completes the proof. □
It is well known that $\kappa_2=1$ in Hilbert spaces. From Theorem 1, we derive the following conclusion.
Corollary 1.
Let $C\subset H$ be a closed convex set. Let $\{S_n\}_{n=0}^\infty$ be a family of $\varsigma$-uniformly strict pseudocontractions defined on $C$. Suppose that $B_1,B_2: C\to 2^H$ are both maximal monotone operators and $A_k: C\to H$ is a $\sigma_k$-inverse-strongly monotone mapping for $k=1,2$. Define the mapping $G: C\to C$ by $G:=J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(I-\zeta_2A_2)$ for constants $\zeta_1,\zeta_2>0$. Let $A: C\to H$ and $B: C\to 2^H$ be a $\sigma$-inverse-strongly monotone mapping and a maximal monotone operator, respectively. For any given $x_0\in C$, $\xi\in(0,1)$ and $\alpha\in(0,\min\{1,2\varsigma\})$, let $\{x_n\}_{n=0}^\infty$ be the sequence generated by
$$\begin{cases} w_n = s_n((1-\xi)x_n+\xi Gw_n)+(1-s_n)((1-\alpha)x_n+\alpha S_nx_n),\\ u_n = J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(w_n-\zeta_2A_2w_n),\\ y_n = J^{B}_{\lambda_n}(u_n-\lambda_nAu_n),\\ z_n = J^{B}_{\lambda_n}(u_n-\lambda_nAy_n+r_n(y_n-u_n)),\\ x_{n+1} = \alpha_nu+\beta_nu_n+\gamma_n((1-\alpha)z_n+\alpha S_nz_n), \quad n\ge 0, \end{cases} \qquad (19)$$
where $\{r_n\},\{s_n\},\{\alpha_n\},\{\beta_n\},\{\gamma_n\}\subset(0,1]$ with $\alpha_n+\beta_n+\gamma_n=1$ and $\{\lambda_n\}\subset(0,\infty)$ satisfy: $\sum_{n=0}^\infty\alpha_n=\infty$ and $\lim_{n\to\infty}\alpha_n=0$; $0<a\le\beta_n\le b<1$ and $0<c\le s_n\le d<1$; $0<r\le r_n<1$ and $0<\lambda\le\lambda_n<\frac{\lambda_n}{r_n}\le\mu<2\sigma$; $0<\zeta_k<2\sigma_k$ for $k=1,2$. Assume that $\sum_{n=0}^\infty\sup_{x\in D}\|S_{n+1}x-S_nx\|<\infty$ for every bounded subset $D$ of $C$. Define the self-mapping $S$ by $Sx=\lim_{n\to\infty}S_nx$ for $x\in C$, and further assume that $\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)=\mathrm{Fix}(S)$. If $\Omega:=\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{Fix}(G)\cap(A+B)^{-1}0\ne\emptyset$, then $\lim_{n\to\infty}\|x_n-x^*\|=0$, where $x^*\in\Omega$ uniquely solves $\langle x^*-u, p-x^*\rangle\ge 0$ for all $p\in\Omega$.
Next, we recall the least absolute shrinkage and selection operator (LASSO) [35], which can be formulated as a convex constrained optimization problem:
$$\min_{y\in H}\ \frac{1}{2}\|Ty-b\|_2^2 \quad\text{subject to}\quad \|y\|_1\le s, \qquad (20)$$
where $T$ is a bounded linear operator on $H$, $b$ is a fixed vector in $H$, and $s>0$ is a real number. In this section, $\Lambda$ denotes the set of solutions of LASSO (20). LASSO, which serves as a unified model for a number of real problems, has been investigated in various settings. A solution of (20) is a minimizer of the following minimization problem: $\min_{y\in H}g(y)+h(y)$, where $g(y):=\frac{1}{2}\|Ty-b\|_2^2$ and $h(y):=\lambda\|y\|_1$. It is known that $\nabla g(y)=T^*(Ty-b)$ is $\frac{1}{\|T^*T\|}$-inverse-strongly monotone. Hence, $z$ solves the LASSO if and only if $z$ solves the problem of finding $z\in H$ such that
$$0\in\nabla g(z)+\partial h(z) \;\Longleftrightarrow\; z-\lambda\nabla g(z)\in z+\lambda\,\partial h(z) \;\Longleftrightarrow\; z=\mathrm{prox}_h(z-\lambda\nabla g(z)),$$
where $\lambda>0$ is real and $\mathrm{prox}_h(y)$ is the proximal mapping associated with $h(y):=\lambda\|y\|_1$, defined by
$$\mathrm{prox}_h(y)=\operatorname*{argmin}_{u\in H}\Big\{\lambda\|u\|_1+\frac{1}{2}\|u-y\|_2^2\Big\}, \quad y\in H.$$
This proximal mapping is separable in the coordinates. Thus, for $y\in H$ identified with $(y_1,y_2,\dots,y_n)$, one has $\mathrm{prox}_h(y)=\mathrm{prox}_{\lambda\|\cdot\|_1}(y)=(\mathrm{prox}_{\lambda|\cdot|}(y_1),\mathrm{prox}_{\lambda|\cdot|}(y_2),\dots,\mathrm{prox}_{\lambda|\cdot|}(y_n))$, with $\mathrm{prox}_{\lambda|\cdot|}(y_i)=\operatorname{sgn}(y_i)\max\{|y_i|-\lambda,0\}$.
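A tiny numeric check of this soft-thresholding formula (illustrative values of mine):

```python
import numpy as np

def prox_h(y, lam):
    # componentwise soft-thresholding: sgn(y_i) * max(|y_i| - lam, 0)
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

print(prox_h(np.array([3.0, -0.4, 1.2, -2.5]), 1.0))  # [ 2. -0.  0.2 -1.5]
```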
By putting $C=H$, $A=\nabla g$, $B=\partial h$ and $\sigma=\frac{1}{\|T^*T\|}$ in Corollary 1, we obtain the following result immediately.
Corollary 2.
Let $A_k$, $B_k$ ($k=1,2$) and $\{S_n\}_{n=0}^\infty$ be the same as in Corollary 1 with $C=H$. Assume that $\Omega:=\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{Fix}(G)\cap\Lambda\ne\emptyset$. For any given $x_0\in H$, $\xi\in(0,1)$ and $\alpha\in(0,\min\{1,2\varsigma\})$, let $\{x_n\}_{n=0}^\infty$ be the sequence generated by
$$\begin{cases} w_n = s_n((1-\xi)x_n+\xi Gw_n)+(1-s_n)((1-\alpha)x_n+\alpha S_nx_n),\\ u_n = J^{B_1}_{\zeta_1}(I-\zeta_1A_1)J^{B_2}_{\zeta_2}(w_n-\zeta_2A_2w_n),\\ y_n = \mathrm{prox}_h(u_n-\lambda_nT^*(Tu_n-b)),\\ z_n = \mathrm{prox}_h(u_n-\lambda_nT^*(Ty_n-b)+r_n(y_n-u_n)),\\ x_{n+1} = \alpha_nu+\beta_nu_n+\gamma_n((1-\alpha)z_n+\alpha S_nz_n), \quad n\ge 0, \end{cases}$$
where the sequences $\{r_n\},\{s_n\},\{\alpha_n\},\{\beta_n\},\{\gamma_n\}\subset(0,1]$ with $\alpha_n+\beta_n+\gamma_n=1$ and $\{\lambda_n\}\subset(0,\infty)$ satisfy the conditions presented in Corollary 1 with $\sigma=\frac{1}{\|T^*T\|}$. Then $x_n\to x^*\in\Omega$ as $n\to\infty$.
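As a final illustration, with the trivial choices $S_n=I$ and $G=I$ (so the CFPP and GSVI constraints drop out and $w_n=u_n=x_n$), the scheme above collapses to an anchored forward-backward extragradient iteration for LASSO. The following toy sketch makes assumptions throughout (random data, the penalty weight, the parameter schedule); it is not an experiment from the paper:

```python
import numpy as np

def prox_h(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)   # prox of t*||.||_1

rng = np.random.default_rng(1)
T = rng.standard_normal((40, 60))
b = T @ np.concatenate([rng.standard_normal(5), np.zeros(55)])
grad = lambda v: T.T @ (T @ v - b)                       # grad g(v) = T*(Tv - b)
sigma = 1.0 / np.linalg.norm(T.T @ T, 2)                 # inverse-strong monotonicity modulus

u = np.zeros(60)                                         # the fixed anchor element u
x = np.zeros(60)
pen, r, lam = 0.01, 0.9, 0.9 * sigma                     # 0 < lam <= lam/r <= mu < 2*sigma
for n in range(1, 2001):
    a_n = 1.0 / (n + 1)                                  # alpha_n -> 0, sum alpha_n = inf
    b_n = g_n = 0.5 * (1.0 - a_n)                        # beta_n + gamma_n = 1 - alpha_n
    u_n = x                                              # S_n = I, G = I: w_n = u_n = x_n
    y = prox_h(u_n - lam * grad(u_n), lam * pen)         # y_n
    z = prox_h(u_n - lam * grad(y) + r * (y - u_n), lam * pen)  # z_n
    x = a_n * u + b_n * u_n + g_n * z                    # x_{n+1}
```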

Funding

This work was supported by the Gyeongnam National University of Science and Technology Grant (period: 2020.3.1–2022.2.28).

Acknowledgments

The author thanks the referees for their useful suggestions, which improved the presentation of this manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019, 74, 821–850.
2. Sahu, D.R.; Yao, J.C.; Verma, M.; Shukla, K.K. Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization 2020.
3. Nguyen, L.V.; Qin, X. The minimal time function associated with a collection of sets. ESAIM Control Optim. Calc. Var. 2020.
4. Ansari, Q.H.; Islam, M.; Yao, J.C. Nonsmooth variational inequalities on Hadamard manifolds. Appl. Anal. 2020, 99, 340–358.
5. An, N.T. Solving k-center problems involving sets based on optimization techniques. J. Global Optim. 2020, 76, 189–209.
6. Nguyen, L.V.; Ansari, Q.H.; Qin, X. Weak sharpness and finite convergence for solutions of nonsmooth variational inequalities in Hilbert spaces. Appl. Math. Optim. 2020.
7. Nguyen, L.V.; Ansari, Q.H.; Qin, X. Linear conditioning, weak sharpness and finite convergence for equilibrium problems. J. Global Optim. 2020, 77, 405–424.
8. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. i Mat. Metod. 1976, 12, 747–756.
9. Dehaish, B.A.B. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336.
10. Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195.
11. Qin, X.; Wang, L.; Yao, J.C. Inertial splitting method for maximal monotone mappings. J. Nonlinear Convex Anal. 2020, in press.
12. Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419.
13. Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506.
14. Liu, L. A hybrid steepest descent method for solving split feasibility problems involving nonexpansive mappings. J. Nonlinear Convex Anal. 2019, 20, 471–488.
15. Liu, L.; Qin, X.; Agarwal, R.P. Iterative methods for fixed points and zero points of nonlinear mappings with applications. Optimization 2019.
16. Chang, S.S. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196.
17. Qin, X.; Yao, J.C. Weak convergence of a Mann-like algorithm for nonexpansive and accretive operators. J. Inequal. Appl. 2016, 2016, 232.
18. Abbas, H.A.; Aremu, K.O.; Jolaoso, L.O.; Mewomo, O.T. An inertial forward-backward splitting method for approximating solutions of certain optimization problem. J. Nonlinear Funct. Anal. 2020, 2020, 6.
19. Kimura, Y.; Nakajo, K. Strong convergence for a modified forward-backward splitting method in Banach spaces. J. Nonlinear Var. Anal. 2019, 3, 5–18.
20. Ceng, L.C. Asymptotic inertial subgradient extragradient approach for pseudomonotone variational inequalities with fixed point constraints of asymptotically nonexpansive mappings. Commun. Optim. Theory 2020, 2020, 2.
21. Gibali, A. Polyak's gradient method for solving the split convex feasibility problem and its applications. J. Appl. Numer. Optim. 2019, 1, 145–156.
22. Tong, M.Y.; Tian, M. Strong convergence of the Tseng extragradient method for solving variational inequalities. Appl. Set-Valued Anal. Optim. 2020, 2, 19–33.
23. Barbu, V. Nonlinear Semigroups and Differential Equations in Banach Spaces; Noordhoff: Amsterdam, The Netherlands, 1976.
24. Ceng, L.C.; Postolache, M.; Yao, Y. Iterative algorithms for a system of variational inclusions in Banach spaces. Symmetry 2019, 11, 811.
25. Aoyama, K.; Iiduka, H.; Takahashi, W. Weak convergence of an iterative sequence for accretive operators in Banach spaces. Fixed Point Theory Appl. 2006, 2006, 35390.
26. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
27. Zhang, H.; Su, Y. Convergence theorems for strict pseudocontractions in q-uniformly smooth Banach spaces. Nonlinear Anal. 2009, 71, 4572–4580.
28. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, 2012, 109236.
29. Qin, X.; Cho, S.Y.; Yao, J.C. Weak and strong convergence of splitting algorithms in Banach spaces. Optimization 2020, 69, 243–267.
30. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67, 2350–2360.
31. Bruck, R.E. Properties of fixed-point sets of nonexpansive mappings in Banach spaces. Trans. Am. Math. Soc. 1973, 179, 251–262.
32. Reich, S. Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75, 287–292.
33. Xue, Z.; Zhou, H.; Cho, Y.J. Iterative solutions of nonlinear equations for m-accretive operators in Banach spaces. J. Nonlinear Convex Anal. 2000, 1, 313–320.
34. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
35. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
