1. Introduction
Many applications in science and engineering have shown a huge interest in solving an inverse problem of finding a point $x \in \mathbb{R}^n$ satisfying
$$Bx = b, \qquad (1)$$
where $b$ is the observed data and $B$ is the corresponding nonzero matrix. In practice, the inverse problem typically suffers from the ill-conditioning of the matrix B, so that (1) may have no solution. An approach for finding an approximate solution by minimizing the squared norm of the residual term has therefore been considered:
$$\min_{x \in \mathbb{R}^n} \; \|Bx - b\|^2. \qquad (2)$$
Observe that the problem (2) may have several optimal solutions; in this situation, it is not clear which of these solutions should be considered. One strategy for pursuing the best optimal solution among these many solutions is to add a regularization term to the objective function. The classical technique is the celebrated Tikhonov regularization [1] of the form
$$\min_{x \in \mathbb{R}^n} \; \|Bx - b\|^2 + \lambda \|x\|^2, \qquad (3)$$
where $\lambda > 0$ is a regularization parameter. In this setting, the uniqueness of the solution to (3) is guaranteed. However, from a practical point of view, the shortcoming of this strategy is that the unique solution to the regularization problem (3) may not be optimal in the original sense of (2); see [2,3] for further discussions.
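As a quick illustration (this snippet is not part of the original text), the Tikhonov-regularized problem (3) has the closed-form minimizer $x_\lambda = (B^{\top}B + \lambda I)^{-1}B^{\top}b$, which can be computed directly:

```python
import numpy as np

def tikhonov(B, b, lam):
    """Unique minimizer of ||Bx - b||^2 + lam * ||x||^2 for lam > 0."""
    n = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(n), B.T @ b)

# Small random example with hypothetical sizes.
rng = np.random.default_rng(0)
B = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
print(tikhonov(B, b, lam=0.1))
```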
To overcome this, one can consider the strategy of selecting a specific solution among the optimal solutions to (2) by minimizing an additional prior function over these optimal solutions. This brings the framework of the following bilevel optimization problem,
$$\min \; f(x) + h(Ax) \quad \text{subject to} \quad x \in \operatorname*{argmin}_{u \in \mathbb{R}^n} \|Bu - b\|^2, \qquad (4)$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^m \to \mathbb{R}$ are convex functions, and $A$ is a nonzero linear transformation. It is very important to point out that many problems can be formulated in this form. For instance, if $f = \lambda_1\|\cdot\|_1$, $h = \lambda_2\|\cdot\|_1$, $\lambda_1, \lambda_2 > 0$, and $A$ is the first-order difference matrix defined by $(Ax)_i := x_{i+1} - x_i$, the problem (4) becomes the fused lasso [4] solution to the problem (2). This situation also occurs in image denoising problems ($\lambda_1 = 0$ and B is the identity matrix), and in image inpainting problems ($\lambda_1 = 0$ and B is a symmetric diagonal matrix), where the term $h(Ax) = \lambda_2 \sum_{i=1}^{n-1} |x_{i+1} - x_i|$ is known as a 1D total variation [5]. When $f = \lambda_1\|\cdot\|_1$, $h = \frac{\lambda_2}{2}\|\cdot\|^2$, $\lambda_1, \lambda_2 > 0$, and A is the identity matrix, the problem (4) becomes the elastic net [6] solution to the problem (2). Moreover, in wavelet-based image restoration problems, the matrix A is given by an inverse wavelet transform [7].
Let us consider the constrained set of (4). It is known that introducing the Landweber operator of the form
$$T(x) := x + \frac{1}{\|B\|^2} B^{\top}(b - Bx)$$
yields that T is firmly nonexpansive and the set of all fixed points of T is nothing else than the set $\operatorname*{argmin}_{u \in \mathbb{R}^n} \|Bu - b\|^2$; see [8] for more details. Motivated by this observation, the bilevel problem (4) can be considered in the general setting as
$$\min \; f(x) + h(Ax) \quad \text{subject to} \quad x \in \operatorname{Fix} T, \qquad (5)$$
where $T : \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear operator with $\operatorname{Fix} T \neq \emptyset$. Note that problem (5) encompasses not only the problem (4), but also many problems in the literature, for instance, the minimization over the intersection of a finite number of sublevel sets of convex nonsmooth functions (see Section 5.2), and the minimization over the intersection of many convex sets for which the metric projection onto the intersection cannot be computed explicitly; see [9,10,11] for more details.
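As a small numerical illustration (not taken from the paper), the Landweber operator above can be formed directly from B and b, and any least-squares solution is one of its fixed points:

```python
import numpy as np

def landweber(B, b):
    """Landweber operator T(x) = x + (1/||B||^2) B^T (b - Bx).

    With the step 1/||B||^2 this operator is firmly nonexpansive, and its fixed
    points are exactly the minimizers of ||Bx - b||^2."""
    step = 1.0 / np.linalg.norm(B, 2) ** 2          # 1 / ||B||^2 (squared spectral norm)
    return lambda x: x + step * B.T @ (b - B @ x)

# Sanity check on random data: a least-squares solution is a fixed point of T.
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
T = landweber(B, b)
x_ls, *_ = np.linalg.lstsq(B, b, rcond=None)
print(np.allclose(T(x_ls), x_ls))                   # True (up to rounding)
```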
There are some existing methods for solving convex optimization problems over a fixed-point set in the form of (5), but the celebrated one is the hybrid steepest descent method, which was first investigated in [12]. Note that the algorithm proposed by Yamada [12] relies on the hypotheses that the objective functions are strongly convex and smooth and that the operator T is nonexpansive. Several variants and generalizations of this well-known method exist: for instance, Yamada and Ogura [11] considered the same scheme for solving the problem (5) when T belongs to the class of so-called quasi-shrinking operators; Cegielski [10] proposed a generalized hybrid steepest descent method by using a sequence of quasi-nonexpansive operators; and Iiduka [13,14] considered the nonsmooth convex optimization problem (5) with fixed-point constraints given by certain quasi-nonexpansive operators.
On the other hand, in the recent decade, the split common fixed point problem [15,16] has turned out to be one of the main attractions among several nonlinear problems due to its wide applications in many image and signal processing problems. Actually, for a given nonzero linear transformation $A : \mathbb{R}^n \to \mathbb{R}^m$ and two nonlinear operators $T : \mathbb{R}^n \to \mathbb{R}^n$ and $S : \mathbb{R}^m \to \mathbb{R}^m$ with $\operatorname{Fix} T \neq \emptyset$ and $\operatorname{Fix} S \neq \emptyset$, the split common fixed point problem is to find a point $x \in \operatorname{Fix} T$ for which $Ax \in \operatorname{Fix} S$. The key idea of this problem is to find a point in the fixed point set of a nonlinear operator in a primal space such that its image under an appropriate linear transformation is a fixed point of another nonlinear operator in another space. This situation appears, for instance, in dynamic emission tomographic image reconstruction [17] and in intensity-modulated radiation therapy treatment planning; see [18] for more details. Of course, many authors have investigated iterative algorithms for split common fixed point problems and proposed generalizations in several aspects; see, for example, [9,19,20,21,22] and references therein.
The aim of this paper is to present a nonsmooth and non-strongly convex version of the hybrid steepest descent method for minimizing the sum of two convex functions over fixed-point constraints of the form:
$$\min \; f(x) + h(Ax) \quad \text{subject to} \quad x \in X \cap \operatorname{Fix} T \ \text{ and } \ Ax \in \operatorname{Fix} S,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^m \to \mathbb{R}$ are convex nonsmooth functions, $A : \mathbb{R}^n \to \mathbb{R}^m$ is a nonzero linear transformation, $T : \mathbb{R}^n \to \mathbb{R}^n$ and $S : \mathbb{R}^m \to \mathbb{R}^m$ are certain quasi-nonexpansive operators with $\operatorname{Fix} T \neq \emptyset$ and $\operatorname{Fix} S \neq \emptyset$, and $X \subset \mathbb{R}^n$ is a simple closed convex bounded set. We prove the convergence of the function values to the minimum value under some control conditions on a step-size sequence and a parameter.
The paper is organized as follows. After recalling and introducing some useful notions and tools in
Section 2, we present our algorithm and discuss the convergence analysis in
Section 3. Furthermore, in
Section 4, we discuss an important implication of our problem and algorithm for minimizing a sum of convex functions over coupling constraints. In
Section 5, we discuss in detail some remarkably practical applications, and
Section 6 describes the results of numerical experiments on a fused-lasso-like problem. Finally, the conclusions are given in
Section 7.
2. Preliminaries
We summarize some useful notations, definitions, and properties, which we will utilize later.
For further details, the reader can consult well-known books, for instance, [
8,
23,
24,
25].
Let $\mathbb{R}^n$ be an n-dimensional Euclidean space with an inner product $\langle \cdot, \cdot \rangle$ and the corresponding norm $\|\cdot\|$.
Let $T : \mathbb{R}^n \to \mathbb{R}^n$ be an operator. We denote the set of all fixed points of T by $\operatorname{Fix} T$, that is,
$$\operatorname{Fix} T := \{x \in \mathbb{R}^n : T(x) = x\}.$$
We say that T is $\rho$-strongly quasi-nonexpansive ($\rho$-SQNE), where $\rho \geq 0$, if $\operatorname{Fix} T \neq \emptyset$ and
$$\|T(x) - z\|^2 \leq \|x - z\|^2 - \rho \|T(x) - x\|^2$$
for all $x \in \mathbb{R}^n$ and $z \in \operatorname{Fix} T$. If $\rho > 0$, then T is called strongly quasi-nonexpansive (SQNE). If $\rho = 0$, then T is called quasi-nonexpansive (QNE), that is,
$$\|T(x) - z\| \leq \|x - z\|$$
for all $x \in \mathbb{R}^n$ and $z \in \operatorname{Fix} T$. Clearly, if T is SQNE, then it is QNE. We say that T is a cutter if $\operatorname{Fix} T \neq \emptyset$ and
$$\langle x - T(x), z - T(x) \rangle \leq 0$$
for all $x \in \mathbb{R}^n$ and all $z \in \operatorname{Fix} T$. We say that T is firmly nonexpansive (FNE) if
$$\langle T(x) - T(y), x - y \rangle \geq \|T(x) - T(y)\|^2$$
for all $x, y \in \mathbb{R}^n$.
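To make these notions concrete (an illustrative check, not part of the original text), the metric projection onto a closed convex set is firmly nonexpansive and hence a cutter; the following sketch verifies the cutter inequality numerically for the projection onto the Euclidean unit ball.

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed Euclidean unit ball."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

rng = np.random.default_rng(1)
for _ in range(1000):
    x = 3.0 * rng.standard_normal(5)            # an arbitrary point
    z = rng.standard_normal(5)
    z = z / (np.linalg.norm(z) + 1.0)           # a point inside the ball, hence a fixed point
    Tx = proj_unit_ball(x)
    # Cutter inequality: <x - T(x), z - T(x)> <= 0 for every fixed point z.
    assert np.dot(x - Tx, z - Tx) <= 1e-12
print("cutter inequality verified on random samples")
```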
The following properties will be applied in the next sections.
Fact 1 (Lemma 2.1.21 [8]). If $T : \mathbb{R}^n \to \mathbb{R}^n$ is QNE, then $\operatorname{Fix} T$ is closed and convex.

Fact 2 (Theorem 2.2.5 [8]). If $T : \mathbb{R}^n \to \mathbb{R}^n$ is FNE with $\operatorname{Fix} T \neq \emptyset$, then T is a cutter.

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a function and $x \in \mathbb{R}^n$. The subdifferential of f at x is the set
$$\partial f(x) := \{u \in \mathbb{R}^n : f(y) \geq f(x) + \langle u, y - x \rangle \ \text{for all } y \in \mathbb{R}^n\}.$$
If $\partial f(x) \neq \emptyset$, then an element $u \in \partial f(x)$ is called a subgradient of f at x.
Fact 3 (Corollary 16.15 [24]). Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function. Then, the subdifferential $\partial f(x)$ is nonempty for all $x \in \mathbb{R}^n$.

Fact 4 (Proposition 16.17 [24]). Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function. Then, the subdifferential $\partial f$ maps every bounded subset of $\mathbb{R}^n$ to a bounded set.

As we work on the n-dimensional Euclidean space, we will use the notion of a matrix instead of the notion of a linear transformation throughout this work. Denote by $\mathbb{R}^{m \times n}$ the set of all real-valued $m \times n$ matrices. Let $A \in \mathbb{R}^{m \times n}$ be given. We denote by $\operatorname{ran}(A)$ its range and by $A^{\top}$ its (conjugate) transpose. We denote the induced norm of A by $\|A\|$, which is given by $\|A\| = \sqrt{\lambda_{\max}(A^{\top}A)}$, where $\lambda_{\max}(A^{\top}A)$ is the maximum eigenvalue of the matrix $A^{\top}A$.
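For concreteness (an illustrative snippet, not from the original text), the induced norm and one valid subgradient of the nonsmooth $\ell_1$ norm can be computed as follows; the coordinatewise sign vector is an element of $\partial\|\cdot\|_1(x)$.

```python
import numpy as np

def induced_norm(A):
    """||A|| = sqrt(lambda_max(A^T A)), i.e., the largest singular value of A."""
    return np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

def l1_subgradient(x):
    """np.sign(x) is one valid subgradient of the l1 norm at x (0 is used where x_i = 0)."""
    return np.sign(x)

A = np.array([[1.0, 2.0], [0.0, 3.0]])
print(np.isclose(induced_norm(A), np.linalg.norm(A, 2)))   # True: equals the spectral norm
print(l1_subgradient(np.array([-1.5, 0.0, 2.0])))          # [-1.  0.  1.]
```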
3. Method and its Convergence
Now, we formulate the composite nonsmooth convex minimization problem over the intersections of fixed-point sets which we aim to investigate throughout this paper.
Problem 1. Let $\mathbb{R}^n$ and $\mathbb{R}^m$ be two Euclidean spaces. Assume that
$f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^m \to \mathbb{R}$ are convex functions;
$A \in \mathbb{R}^{m \times n}$ is a nonzero matrix;
$T : \mathbb{R}^n \to \mathbb{R}^n$ is QNE, and $S : \mathbb{R}^m \to \mathbb{R}^m$ is a cutter with $\operatorname{Fix} T \neq \emptyset$ and $\operatorname{Fix} S \neq \emptyset$;
X is a nonempty convex closed bounded simple subset of $\mathbb{R}^n$.
Our objective is to solve
$$\operatorname*{minimize} \; f(x) + h(Ax) \quad \text{subject to} \quad x \in X \cap \operatorname{Fix} T \ \text{ and } \ Ax \in \operatorname{Fix} S.$$
Throughout this work, we assume that the solution set of Problem 1 is nonempty.
Problem 1 can be viewed as a bilevel problem in which data are given from two sources in a system. Actually, let us consider a system of two users at different sources (they may have different numbers of factors, n and m) who can communicate with each other via the transformation A. The first user aims to find the best solutions with respect to a criterion f among many feasible points represented in the form of the fixed point set of an appropriate operator T. Similarly, the second user has its own objective in the same fashion of finding the best solutions among feasible points in the fixed point set of S with respect to a prior criterion h. Now, to find the best solutions of this system, we consider the fixed-point subgradient splitting method (FSSM, for short) stated as Algorithm 1.
Algorithm 1: Fixed-Point Subgradient Splitting Method.
Initialization: Choose a positive step-size sequence $\{\alpha_k\}$, a parameter $\gamma > 0$, and an arbitrary initial point $x_1$.
Iterative Step: For the current iterate $x_k$, compute the next iterate $x_{k+1}$.
Remark 1. Actually, this algorithm simultaneously has the following features: (i) splitting computation, (ii) a simple scheme, and (iii) boundedness of the iterates. Concerning the first feature, we present the iterative scheme by allowing the processing of a subgradient of f and the use of the operator T in the space $\mathbb{R}^n$, and of a subgradient of h and the use of the operator S in the space $\mathbb{R}^m$, separately. Regarding the simplicity of the iterative scheme, we need not compute the inverse of the matrix A; in this case, the transpose of A is enough. Finally, the third feature is typically required when establishing the convergence of a subgradient-type method. Of course, boundedness is often imposed in image processing and machine learning in the form of a (big) box constraint or a big Euclidean ball.
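To visualize this splitting structure, the following sketch shows one plausible arrangement of a single iteration using exactly these ingredients: a subgradient of h and the operator S in $\mathbb{R}^m$, a transfer back through $A^{\top}$, and a subgradient of f, the operator T, and the projection onto X in $\mathbb{R}^n$. The precise way $\alpha_k$ and $\gamma$ enter Algorithm 1 is given in the algorithm statement; this code is only an illustration of the splitting idea, not the exact update.

```python
import numpy as np

def splitting_step(x, A, T, S, subgrad_f, subgrad_h, proj_X, alpha, gamma):
    """One illustrative fixed-point subgradient splitting step (generic sketch,
    not necessarily the exact update rule of Algorithm 1)."""
    Ax = A @ x
    y = S(Ax) - alpha * subgrad_h(Ax)            # operator S and subgradient of h in R^m
    z = x + gamma * A.T @ (y - Ax)               # transfer the correction back via A^T (no inverse of A)
    return proj_X(T(z - alpha * subgrad_f(z)))   # subgradient of f, operator T, projection onto X in R^n
```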
To study the convergence properties of the function values of a sequence generated by Algorithm 1, we start with the following technical result.
Lemma 1. Let $\{x_k\}$ be a sequence generated by Algorithm 1. Then, for every $k \in \mathbb{N}$ and every feasible point of Problem 1, the following inequality holds. Proof. Let a feasible point of Problem 1
be arbitrary. By the definition of
, we have
Now, by using the definition of
and the cutter property of
S, we derive
which in turn implies that (
6) becomes
We now focus on the last two terms of the right-hand side of (
7).
Observe that
thus, by the definition of
, we obtain
Now, inequalities (
6)–(
8) together give
On the other hand, using the definition of
and the assumption that
T is QNE, we obtain
Replacing (
9) in (
10), we obtain
Next, the convexities of f and h give
and
By making use of these two inequalities in (
11), we obtain
which is the required inequality and the proof is completed. ☐
The following lemma is very useful for the convergence result.
Lemma 2. Let $\{x_k\}$ be a sequence generated by Algorithm 1. Then, $\{x_k\}$ is bounded. Furthermore, if the positive sequence $\{\alpha_k\}$ is bounded, then the other sequences generated in the course of Algorithm 1 are bounded as well.
Proof. As
X is a bounded set, it is clear that the sequence
is bounded. Now, let
be given. The linearity of
A and quasi-nonexpansiveness of
S yield
This implies that
is bounded. Consequently, applying Fact 4, we obtain that
is also bounded.
By the triangle inequality, we have
Therefore, the boundedness of
implies that
is bounded. Consequently, the triangle inequality and the linearity of
yield the boundedness of
. As
T is QNE, we have
is bounded. Thus,
is bounded by Fact 4. ☐
For the sake of simplicity, we let $F(x) := f(x) + h(Ax)$ denote the objective function of Problem 1 and write $F^*$ for its minimum value over the constraint set.
We consider a convergence property of the objective values under diminishing step sizes in the following theorem.
Theorem 1. Let $\{x_k\}$ be a sequence generated by Algorithm 1. If the following control conditions hold,
- (i) $\gamma \in \left(0, \tfrac{1}{\|A\|^2}\right]$;
- (ii) $\sum_{k=1}^{\infty} \alpha_k = +\infty$ and $\sum_{k=1}^{\infty} \alpha_k^2 < +\infty$;
then the function values $F(x_k)$ converge to the minimum value $F^*$.
Proof. Let
be given. We note from Lemma 1 that for every
this is true because
via the assumption (i). Summing up (
12) for
we obtain that
where
This implies that
Next, we show that
. By supposing a contradiction that
there exist
and
such that
for all
. Thus, we have
which is a contradiction. Therefore, we can conclude that
☐
Remark 2. The convergence results obtained in Theorem 1 are slightly different from the convergence results obtained by the classical gradient method or even the projected gradient method. This is because, in each iteration, we cannot ensure whether the estimate $x_k$ belongs to the constrained set or not; this means that the property $F(x_k) \geq F^*$ may not be true in general. Similarly, we cannot ensure that $Ax_k \in \operatorname{Fix} S$.
Remark 3. A step-size sequence $\{\alpha_k\}$ satisfying the assumptions $\sum_{k=1}^{\infty} \alpha_k = +\infty$ and $\sum_{k=1}^{\infty} \alpha_k^2 < +\infty$ is, for instance, $\alpha_k = \frac{\alpha}{k}$, where $\alpha > 0$.
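As a quick check of this example (assuming the harmonic-type step size stated above), both series behave as required:
$$\sum_{k=1}^{\infty} \alpha_k = \alpha \sum_{k=1}^{\infty} \frac{1}{k} = +\infty, \qquad \sum_{k=1}^{\infty} \alpha_k^2 = \alpha^2 \sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\alpha^2 \pi^2}{6} < +\infty.$$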
4. Convex Minimization Involving Sum of Composite Functions
The aim of this section is to show that Algorithm 1 and its convergence properties can be employed when solving a convex minimization problem involving a sum of a finite number of composite functions.
Let us take a look at the composite convex minimization problem:
$$\operatorname*{minimize} \; f(x) + \sum_{i=1}^{r} h_i(A_i x) \quad \text{subject to} \quad x \in X \cap \operatorname{Fix} T \ \text{ and } \ A_i x \in \operatorname{Fix} S_i, \ i = 1, \ldots, r, \qquad (13)$$
where we assume further that, for all $i = 1, \ldots, r$, there hold
- (I) $A_i \in \mathbb{R}^{m_i \times n}$ are nonzero matrices,
- (II) $S_i : \mathbb{R}^{m_i} \to \mathbb{R}^{m_i}$ are cutter operators with $\operatorname{Fix} S_i \neq \emptyset$, and
- (III) $h_i : \mathbb{R}^{m_i} \to \mathbb{R}$ is a convex function.
In this section, we assume that the solution set of (13) is nonempty.
Denote by $\mathbf{H} := \mathbb{R}^{m_1} \times \cdots \times \mathbb{R}^{m_r}$ the product of the spaces $\mathbb{R}^{m_1}, \ldots, \mathbb{R}^{m_r}$ equipped with the coordinatewise addition and scalar multiplication, with the inner product defined by
$$\langle \mathbf{y}, \mathbf{z} \rangle := \sum_{i=1}^{r} \langle y_i, z_i \rangle$$
and the norm
$$\|\mathbf{y}\| := \sqrt{\sum_{i=1}^{r} \|y_i\|^2}$$
for all $\mathbf{y} = (y_1, \ldots, y_r)$, $\mathbf{z} = (z_1, \ldots, z_r) \in \mathbf{H}$; it is again a Euclidean space (see [24], Example 2.1). Define a matrix $\mathbf{A} : \mathbb{R}^n \to \mathbf{H}$ by
$$\mathbf{A}x := (A_1 x, \ldots, A_r x)$$
and an operator $\mathbf{S} : \mathbf{H} \to \mathbf{H}$ by
$$\mathbf{S}(\mathbf{y}) := (S_1(y_1), \ldots, S_r(y_r))$$
for all $\mathbf{y} = (y_1, \ldots, y_r) \in \mathbf{H}$. Note that the operator $\mathbf{S}$ is a cutter with
$$\operatorname{Fix} \mathbf{S} = \operatorname{Fix} S_1 \times \cdots \times \operatorname{Fix} S_r.$$
Furthermore, defining a function $\mathbf{h} : \mathbf{H} \to \mathbb{R}$ by
$$\mathbf{h}(\mathbf{y}) := \sum_{i=1}^{r} h_i(y_i)$$
for all $\mathbf{y} = (y_1, \ldots, y_r) \in \mathbf{H}$, we also have that the function $\mathbf{h}$ is a convex function (see [24], Proposition 8.25). By the above setting, we can rewrite the problem (13) as
$$\operatorname*{minimize} \; f(x) + \mathbf{h}(\mathbf{A}x) \quad \text{subject to} \quad x \in X \cap \operatorname{Fix} T \ \text{ and } \ \mathbf{A}x \in \operatorname{Fix} \mathbf{S},$$
which is nothing else than Problem 1.
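As an illustration of this product-space construction (a sketch with hypothetical component operators and functions, not taken from the paper), stacking the matrices and acting blockwise realizes $\mathbf{A}$, $\mathbf{S}$, and $\mathbf{h}$:

```python
import numpy as np

def make_product_pieces(As, Ss, hs):
    """Given lists of matrices A_i, operators S_i, and convex functions h_i,
    return the stacked matrix A, the blockwise operator S, and the sum h."""
    sizes = [A.shape[0] for A in As]
    offsets = np.cumsum([0] + sizes)

    A_big = np.vstack(As)                                  # A x = (A_1 x, ..., A_r x) stacked

    def S_big(y):                                          # S(y) = (S_1(y_1), ..., S_r(y_r))
        return np.concatenate(
            [Ss[i](y[offsets[i]:offsets[i + 1]]) for i in range(len(As))])

    def h_big(y):                                          # h(y) = sum_i h_i(y_i)
        return sum(hs[i](y[offsets[i]:offsets[i + 1]]) for i in range(len(As)))

    return A_big, S_big, h_big
```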
Here, to investigate the solution of the problem (13), we state Algorithm 2 as follows.
Algorithm 2: Distributed Fixed-Point Subgradient Splitting Method.
Initialization: Choose a positive step-size sequence $\{\alpha_k\}$, a parameter $\gamma > 0$, and an arbitrary initial point $x_1$.
Iterative Step: For the current iterate $x_k$, compute the next iterate $x_{k+1}$.
As a consequence of the above setting, we note that the subdifferential of $\mathbf{h}$ decomposes coordinatewise, that is,
$$\partial \mathbf{h}(\mathbf{y}) = \partial h_1(y_1) \times \cdots \times \partial h_r(y_r)$$
for all $\mathbf{y} = (y_1, \ldots, y_r) \in \mathbf{H}$; see ([25], Corollary 2.4.5). Furthermore, we know that
$$\mathbf{A}^{\top} \mathbf{u} = \sum_{i=1}^{r} A_i^{\top} u_i$$
for all $\mathbf{u} = (u_1, \ldots, u_r) \in \mathbf{H}$. Using these relations componentwise, Algorithm 2 can be rewritten in a distributed fashion in which, at every iteration, each component $i = 1, \ldots, r$ processes only $A_i$, $S_i$, and a subgradient of $h_i$, and the resulting corrections are aggregated through the transposes $A_i^{\top}$. Since the product-space formulation of (13) is an instance of Problem 1, the convergence result therefore follows from Theorem 1 and can be stated as the following corollary.
Corollary 1. Let $\{x_k\}$ be a sequence generated by Algorithm 2. If the following control conditions hold:
- (i) $\gamma \in \left(0, \tfrac{1}{\|\mathbf{A}\|^2}\right]$;
- (ii) $\sum_{k=1}^{\infty} \alpha_k = +\infty$ and $\sum_{k=1}^{\infty} \alpha_k^2 < +\infty$;
then the function values of $\{x_k\}$ converge to the minimum value of the problem (13).
6. Numerical Experiments
In this section, to demonstrate the effectiveness of the fixed-point subgradient splitting method (Algorithm 1), we apply the proposed method to solve a fused-lasso-like problem. All the experiments were performed under MATLAB 9.6 (R2019a) running on a MacBook Pro 13-inch, 2019 with a 2.4 GHz Intel Core i5 processor and 8 GB 2133 MHz LPDDR3 memory.
For a given design matrix $B \in \mathbb{R}^{m \times n}$ and a response vector $b \in \mathbb{R}^m$, we consider the fused-lasso-like problem of the form
$$\operatorname*{minimize} \; \lambda_1 \|x\|_1 + \lambda_2 \sum_{i=1}^{n-1} |x_{i+1} - x_i| \quad \text{subject to} \quad x \in X \cap \operatorname*{argmin}_{u \in \mathbb{R}^n} \|Bu - b\|^2, \qquad (18)$$
where $\lambda_1, \lambda_2 > 0$ are regularization parameters. Observe that by setting the functions $f := \lambda_1 \|\cdot\|_1$ and $h := \lambda_2 \|\cdot\|_1$, the matrix A as the first-order difference matrix, the operator T as the Landweber operator associated with B and b, the operator S as the identity matrix, and the constrained set X as a simple closed convex bounded set, we obtain that the problem (18) is a special case of Problem 1, so that the problem (14) can be solved by Algorithm 1 (see Section 5.1 for more details).
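Under this identification, the pieces needed by Algorithm 1 can be assembled as follows (an illustrative sketch; the regularization weights and the box bound are placeholders rather than the values used in the experiments).

```python
import numpy as np

def difference_matrix(n):
    """A = D in R^{(n-1) x n} with (Dx)_i = x_{i+1} - x_i (1D total variation)."""
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

def landweber(B, b):
    """Landweber operator T(x) = x + (1/||B||^2) B^T (b - Bx); Fix T = argmin ||Bx - b||^2."""
    step = 1.0 / np.linalg.norm(B, 2) ** 2
    return lambda x: x + step * B.T @ (b - B @ x)

def proj_box(x, M=1e6):
    """Projection onto the simple bounded set X = [-M, M]^n (M is a placeholder bound)."""
    return np.clip(x, -M, M)

lam1, lam2 = 0.1, 0.1                              # hypothetical regularization weights
f = lambda x: lam1 * np.linalg.norm(x, 1)          # f(x)  = lambda_1 * ||x||_1
h = lambda y: lam2 * np.linalg.norm(y, 1)          # h(Ax) = lambda_2 * sum |x_{i+1} - x_i|
```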
We generate the matrix B with normally distributed random entries and a given percentage of nonzero elements. We generate the vectors b corresponding to B by the linear model $b = Bx^* + \varepsilon$, where $\varepsilon$ is a noise vector and the vector $x^*$ has a given percentage of nonzero components generated from the normal distribution. The initial point is a vector whose coordinates are chosen randomly. In the numerical experiment, denoting the estimate at iteration k by $x_k$, we consider the behavior of the average of the relative changes of consecutive iterates together with a prescribed optimality tolerance. We performed 10 independent tests for each collection of dimensions (m, n) and each percentage of nonzero elements of B, for various step-size parameters. The results are shown in Table 1, where the average number of iterations (#Iters) and the average CPU time (Time) needed to reach the optimality tolerance for each collection of parameters are presented.
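For readers who wish to reproduce a setting of this kind, the following sketch generates synthetic data in the spirit of the description above; the dimensions, sparsity levels, and noise scale are placeholders, not the exact values used in the reported experiments.

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 200, 500
density_B = 0.10                                   # percentage of nonzero entries of B
density_x = 0.05                                   # percentage of nonzero entries of the true signal

B = rng.standard_normal((m, n)) * (rng.random((m, n)) < density_B)
x_true = rng.standard_normal(n) * (rng.random(n) < density_x)
b = B @ x_true + 0.01 * rng.standard_normal(m)     # linear model with small Gaussian noise

x = rng.random(n)                                  # random starting point
```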
We see that the best-performing choice of the step-size parameter behaves significantly better than the others in the sense of the average number of iterations as well as the average CPU time, for all dimensions and percentages of nonzero elements of B. Moreover, we observed a much bigger average number of iterations and CPU time for certain other combinations of the parameters with all the problem sizes.