Towards Sample-Optimal Methods for Solving Random Quadratic Equations with Structure
Gauri Jagatap and Chinmay Hegde
Electrical and Computer Engineering Department
Iowa State University, Ames, IA, USA 50011

Abstract—We consider the problem of estimating a structured high-dimensional parameter vector using random Gaussian quadratic samples. This problem is a generalization of the classical problem of phase retrieval and impacts numerous problems in computational imaging. We provide a generic algorithm based on alternating minimization that, if properly initialized, achieves information-theoretically optimal sample complexity. In essence, we show that solving a system of random quadratic equations with structural constraints is (nearly) as easy as solving the corresponding linear system with the same constraints, if a proper initial guess of the solution is available. As an immediate consequence, our approach improves upon the best known existing sample complexity results for phase retrieval (structured or otherwise). We support our theory via several numerical experiments.

A full version of this paper is accessible at: https://gaurijagatap.github.io/assets/ISIT18.pdf

I. INTRODUCTION

Motivation: Our focus in this paper is the following constrained estimation problem. An unknown vector of parameters, x∗ ∈ Rn, is observed (or measured) to yield observations y ∈ Rm of the form:

    yi = |⟨ai, x∗⟩|^p,  i ∈ [m],  s.t.  x∗ ∈ Ms,    (1)

where Ms ⊂ Rn is a model set that reflects the structural constraints on x∗. We adopt the familiar setting of under-determined Gaussian observations, A = [a1 . . . ai . . . am]⊤ ∈ Rm×n with m < n. The task is to recover an estimate of x∗ from either absolute-value (p = 1) or quadratic (p = 2) measurements y.¹

¹This work was supported in part by NSF grants CCF-1566281 and CCF-1750920, and a Black and Veatch Faculty Fellowship.
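To make the measurement model concrete, the following is a minimal sketch (ours, not from the paper) of the sampling process in Python/NumPy; the function name observe and its defaults are illustrative assumptions:

    import numpy as np

    def observe(x_star, m, p=1, seed=0):
        # Draw y_i = |<a_i, x*>|^p for Gaussian rows a_i ~ N(0, I_n), per (1).
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((m, x_star.shape[0]))
        y = np.abs(A @ x_star) ** p   # p = 1: magnitude-only; p = 2: quadratic
        return A, y

Note that x∗ is recoverable from (A, y) only up to a global sign, which motivates the sign-invariant distance measure introduced in Section II-C below.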
An important application of the aforementioned setup is the classical signal processing problem of phase retrieval. Here, the measurements correspond to the magnitudes of complex 2D Fourier (or Short Time Fourier) transform coefficients. The sensing apparatus is incapable of detecting the phase of the complex light-field reflected or transmitted from the illuminated object source. This necessitates a phase recovery strategy, and solution approaches have been explored dating back to the 1970s via several works [1], [2]. Recent, renewed interest by the statistical learning community in this problem has focused on the case of Gaussian observations, and has spawned several algorithms which are efficient as well as asymptotically (near) optimal [3]–[6]. However, even in the best case, one requires m > 2n − 1 Gaussian measurements [6], and in the case of high dimension n, the sample complexity m can be very large.

Similarly, polynomial neural networks enable applications such as classification, where the activation functions are quadratic mappings [7]. The problem in (1) then corresponds to learning the weights of a single neuron, x∗, for m Gaussian-distributed training examples ai, with yi being the corresponding output labels. The task is to design efficient algorithms for learning the weights using fewer training samples.

To reduce the sample complexity of such problems, several works introduce sparsity assumptions. Sparsity has been used to great advantage in compressive sensing and streaming algorithms [8], [9], establishing an information-theoretically optimal [10] requirement of O(s log(n/s)) samples for stable recovery of x∗ from linear measurements. Sparsity assumptions for inverting quadratic (or magnitude-only) equations of the form (1) have similarly helped lower computational and memory requirements [5], [11]–[13]. Specifically, several papers consider the problem in (1), where Ms represents all s-sparse vectors x∗, with the assumption of a Gaussian observation framework [14]–[16]. In previous work [16], [17], we have proposed a sparse phase retrieval algorithm called CoPRAM, which is linearly convergent and improves upon all other algorithms, obtaining a Gaussian sample complexity of O(s² log n) in general (O(s log n) if power-law decaying sparse signals are considered [17]).

A natural extension of sparsity is the notion of structured sparsity. Several works in compressed sensing and statistical learning have utilized various structures such as blocks, clusters, and trees [18], [19]. Block structures in the context of sparse phase retrieval have been studied in [16]. Tree structures are popularly found in applications where sparsity is considered in a wavelet basis [20]. While the impact of structured sparsity has been studied thoroughly for the case of linear measurements [21], recovery from quadratic or magnitude-only measurements is relatively less understood.

Our contributions: In this paper, we propose a new algorithmic framework called Model-based CoPRAM to solve the problem of phase retrieval of signals with underlying structured sparsity. Our framework is fairly generic and succeeds for parameters belonging to any structured sparsity model (defined formally below). Moreover, we provably show that if the algorithm is properly initialized, then its sample complexity is information-theoretically optimal. In particular, we analyze a special instantiation of our framework, called Tree CoPRAM, which is applicable in the case of rooted s-sparse tree structures for x∗, and demonstrate the superior performance of our method both in theory and numerical simulations.

In essence, our contributions imply that solving a system of under-determined quadratic equations under structural constraints is (essentially) as easy as solving the corresponding linear system under the same constraints, provided a good initial guess is available.

Techniques: The algorithmic techniques used in this paper are a combination of two focal points: (i) alternating minimization based sparse signal recovery from phaseless measurements (via our previous work on CoPRAM [16]), and (ii) a structured-sparsity promoting subroutine called ModelApprox (e.g., TreeApprox [22] in the context of tree-structured sparsity), which replaces the standard s-sparse projection rule used to enforce sparsity. Additionally, we also design a novel initialization heuristic, which yields an initial estimate x0 that is very close to x∗ in practice. Our main theoretical contribution is a generalization of a recent result of [23] to the case where x∗ belongs to a known model set Ms.

II. BACKGROUND

A. Structured sparsity

We provide some background for the problem formulation in (1). A vector x∗ ∈ Rn is said to be s-sparse if it has no more than s non-zero entries. We use S := {j | x∗j ≠ 0} to indicate the true support of x∗, such that |S| ≤ s. The model notation Ms is introduced as an indicator set comprising all vectors which follow a given structural constraint, underscored with parameter s. Let 𝕄s denote the set of all allowable supports {S1 . . . Si . . . SN}, such that Si ⊆ [n] and |Si| ≤ s; then Ms = {x ∈ Rn | supp(x) ∈ 𝕄s}. As a special case, Ms can be a model representing all s-sparse rooted tree supports, as illustrated in Figure 1.

Fig. 1: A binary wavelet tree for a one-dimensional signal. The squares denote the large wavelet coefficients that arise from the discontinuities in the piecewise smooth signal drawn below. Figure taken from [22].
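For the rooted-tree special case, membership of a support in 𝕄s is easy to check under heap-style indexing of a binary wavelet tree, a convention we assume here (the parent of coefficient j is ⌊j/2⌋). A small illustrative helper:

    def is_rooted_tree_support(support, s):
        # True if the support is a rooted subtree of size at most s
        # (1-based heap indexing: parent of node j is j // 2).
        S = set(support)
        return len(S) <= s and all(j == 1 or j // 2 in S for j in S)

Enforcing this structure during recovery is precisely the role of the tree projection subroutine discussed next.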
B. Model-based CoSaMP

Model-based CoSaMP (Compressive Sampling Matching Pursuit) [19] is a popular technique to recover structured (for example, tree) s-sparse vectors or signals x̃∗ ∈ Rn from linear observations ỹ ∈ Rm of the form:

    ỹ = Ãx̃∗,

where Ã ∈ Rm×n and m < n. The sensing matrix Ã is required to satisfy the model-RIP [19], with constant δMs, such that for all x̃ ∈ Ms, the following holds:

    (1 − δMs)‖x̃‖2² ≤ ‖Ãx̃‖2² ≤ (1 + δMs)‖x̃‖2².

This holds trivially if the entries ãij of Ã are distributed according to the normal distribution N(0, 1/√m). Model-based CoSaMP solves the following minimization approximately:

    min_{x̃ ∈ Ms} ‖ỹ − Ãx̃‖2².    (2)

Model-based CoSaMP (also referred to as ModelCoSaMP) utilizes a model-based approximation stage (referred to as ModelApprox). A specific instantiation of Model-based CoSaMP, Tree CoSaMP, employs an exact or approximate tree projection subroutine called TreeApprox [22], [24], to ensure that the output of the minimization in (2), x̃⁺, belongs to the model Ms. This approach is largely parameter-free and only requires knowledge of the signal sparsity s and the assumption of a tree structure.
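The following is a minimal sketch of one Model-based CoSaMP iteration in the spirit of [19]; it is our illustration, not the reference implementation. The model_approx argument stands in for ModelApprox, and the default top_s_approx is plain s-sparse thresholding (for Tree CoSaMP one would substitute an exact or approximate tree projection such as TreeApprox [22], [24]):

    import numpy as np

    def top_s_approx(v, s):
        # Placeholder ModelApprox: keep the s largest-magnitude entries.
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-s:]
        out[idx] = v[idx]
        return out

    def model_cosamp(A, y, s, x0, iters=20, model_approx=top_s_approx):
        # Approximately solve (2): min ||y - A x||_2 over x in Ms.
        x = x0.copy()
        for _ in range(iters):
            proxy = A.T @ (y - A @ x)                  # correlate with residual
            omega = np.flatnonzero(model_approx(proxy, 2 * s))
            T = np.union1d(omega, np.flatnonzero(x))   # merge supports
            b = np.zeros_like(x)
            b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
            x = model_approx(b, s)                     # prune back to the model
        return x

The only structure-dependent step is the projection, which is what makes the framework generic across sparsity models.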
C. Phaseless signal recovery

The recovery problem can be expressed by constructing a loss function of the form:

    min_{x ∈ Ms} Σ_{i=1}^m (yi − |⟨ai, x⟩|^p)²,    (3)

where p = 1 or p = 2. Gradient descent based approaches popularly use Wirtinger Flow (which solves the quadratic variant, p = 2) [4], [14], [23], [25], [26] or Amplitude Flow (which solves the magnitude-only variant, p = 1) [6], [15], [27] to calculate the explicit gradient of the objective function in (3), composed of either squared or magnitude-only measurements. In this paper, we use the alternating minimization approach [2], with magnitude-only measurements (p = 1), by introducing a new variable to represent the missing phase information, hence linearizing the problem. We then update the phase variable and the signal variable in an alternating fashion. In the signal estimation stage, we employ the Model-based CoSaMP algorithm to obtain a structurally sparse vector estimate. To evaluate the distance of the x-estimate from x∗, we introduce the expression dist(x1, x2) := min(‖x1 − x2‖2, ‖x1 + x2‖2) for every x1, x2 ∈ Rn. This method is discussed in further detail in Section III-A.
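For reference, the distance measure and the loss (3) translate directly into code (again a sketch of ours, following the NumPy conventions above):

    import numpy as np

    def dist(x1, x2):
        # Sign-invariant distance: min(||x1 - x2||_2, ||x1 + x2||_2).
        return min(np.linalg.norm(x1 - x2), np.linalg.norm(x1 + x2))

    def loss(x, A, y, p=1):
        # Empirical loss (3) for magnitude-only (p=1) or quadratic (p=2) data.
        return np.sum((y - np.abs(A @ x) ** p) ** 2)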
D. Spectral initialization

Non-convex approaches for phase retrieval [4], [5] rely on a spectral initialization technique to ensure that the initial estimate x0 is within a δ0-ball radius of the true solution x∗. This is required to establish subsequent convergence of descent based algorithms. For this, one can construct an estimator matrix M = (1/m) Σ_{i=1}^m yi² ai ai⊤, and use the top left-singular vector of this matrix as an appropriate initialization. Sparse modifications of this strategy involve detecting (partial) support information from the diagonal of the estimator matrix M, by using an approximate projection onto the model Ms. This method is discussed in further detail in Section III-B.
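A sketch of the sparse spectral initialization (corresponding to Lines 1-6 of Algorithm 1 in Section III); the helper names are ours, and model_approx is the same placeholder projection as before:

    import numpy as np

    def spectral_init(A, y, s, model_approx):
        m, n = A.shape
        phi = np.sqrt(np.mean(y ** 2))             # signal power estimate
        marginals = (y ** 2) @ (A ** 2) / m        # diag(M): M_jj = (1/m) sum_i y_i^2 a_ij^2
        S_hat = np.flatnonzero(model_approx(marginals, s))
        M_S = (A[:, S_hat].T * (y ** 2)) @ A[:, S_hat] / m   # M restricted to S_hat
        v = np.zeros(n)
        v[S_hat] = np.linalg.svd(M_S)[2][0]        # top singular vector of M_S
        return phi * v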

III. ALGORITHM

In this section, we propose a new algorithm for solving the tree-sparse phase retrieval problem and analyze its performance. We use a spectral initialization, followed by an alternating minimization based descent approach, similar to our previous work in [16]. Our algorithm is largely parameter-free except for knowledge of the underlying sparsity s. Moreover, the theoretical analysis requires no extra assumptions on the parameter vector, except that its support belongs to a structured sparsity model. We call our algorithm Model-based CoPRAM, which generalizes our previous algorithm called CoPRAM (or Compressive Phase Retrieval with Alternating Minimization) [16], [17]. The algorithm can be broken down into three types of update stages: (i) initialization; (ii) phase estimation; and (iii) signal estimation. The full algorithm is presented in pseudo-code form as Algorithm 1. The phase and signal estimation stages are described in detail in Section III-A. Due to the simplicity of our algorithm, it can easily be extended to a general class of signals defined by any model Ms. In this paper, we focus on the special case where the model Ms corresponds to tree-sparse vectors in Rn.

Algorithm 1 Model-based CoPRAM
Input: A, y, s, t0
 1: Compute signal power: φ² = (1/m) Σ_{i=1}^m yi².
 2: Compute: diag(M) := Mjj = (1/m) Σ_{i=1}^m yi² aij², for j ∈ [n].
 3: Set: Ŝ ← ModelApprox(diag(M)).
 4: Set: v1 ← top singular vector of MŜ = (1/m) Σ_{i=1}^m yi² aiŜ aiŜ⊤.
 5: Compute: v ∈ Rn ← v1 on Ŝ, and 0 on Ŝc.
 6: Compute: x0 ← φv.
 7: for t = 0, · · · , t0 − 1 do
 8:   p^{t+1} ← sign(Ax^t)
 9:   x^{t+1} ← ModelCoSaMP((1/√m)A, (1/√m)(p^{t+1} ◦ y), s, x^t)
10: end for
Output: x^{t0}.
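Assembling the previous sketches, the main loop of Algorithm 1 is only a few lines (a hypothetical end-to-end sketch, not the authors' code; spectral_init, model_cosamp, and top_s_approx are the helpers defined in earlier sections):

    import numpy as np

    def model_copram(A, y, s, t0, model_approx=top_s_approx):
        m = A.shape[0]
        x = spectral_init(A, y, s, model_approx)        # Lines 1-6
        for _ in range(t0):                             # Lines 7-10
            p = np.sign(A @ x)                          # phase estimation, (5)
            x = model_cosamp(A / np.sqrt(m), (p * y) / np.sqrt(m),
                             s, x, model_approx=model_approx)   # signal estimation, (6)
        return x

Note the 1/√m scaling of both inputs, which matches the normalization required for the model-RIP, and the warm start: each signal estimation step is initialized at the current iterate.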
A. Convergence of Model-based CoPRAM

This part of the algorithm is described in Lines 7-10 of Algorithm 1. Once we obtain a good enough initial estimate x0 ∈ Ms such that dist(x0, x∗) ≤ δ0‖x∗‖2, we construct a method to accurately estimate the true x∗. To achieve this, we adapt the alternating minimization approach from [5]. The observation model in (1) can be restated as follows:

    sign(⟨ai, x∗⟩) · yi = ⟨ai, x∗⟩,

for all i ∈ {1, 2, . . . , m}. We denote the phase vector p ∈ Rm as a vector that contains the unknown signs of the measurements, i.e., pi = sign(⟨ai, x⟩) for all i ∈ {1, 2, . . . , m}. Let p∗ denote the true phase vector and let P denote the set of all phase vectors, i.e., P = {p : pi = ±1, ∀i}. Then our measurement model gets modified as:

    p∗ ◦ y = Ax∗.

The loss function in (3) gets modified and is composed of the two variables x and p:

    min_{x ∈ Ms, p ∈ P} ‖Ax − p ◦ y‖2².    (4)

Note that the problem above is not convex, because P is a set of all vectors with entries constrained to be in {−1, 1}. Instead, we alternate between estimating p and x. We perform two estimation steps: (i) if we fix the signal estimate x, then the minimizer p ∈ P is given in closed form as:

    p = sign(Ax),    (5)

(phase estimation, Line 8 of Algorithm 1); (ii) if we fix the phase vector p, the signal vector x ∈ Ms is obtained by solving a (structured) sparse recovery problem,

    min_{x ∈ Ms} (1/√m)‖Ax − p ◦ y‖2,    (6)

if m < n and A/√m satisfies the model-RIP as defined above (signal estimation, Line 9 of Algorithm 1). Here, we employ the Model-based CoSaMP [19] algorithm to (approximately) solve (6). Note that since (6) itself is a non-convex problem, exact minimization can be hard. However, in each signal estimation step, we do not need to explicitly obtain the minimizer; we can still show a sufficient descent criterion by unpacking the analysis of the Model-based CoSaMP algorithm. For analysis reasons, we require that the entries of the input sensing matrix are distributed according to N(0, 1/√m). This can be achieved by scaling down the inputs to Model-based CoSaMP, A and p^{t+1} ◦ y, by a factor of √m. We also use a "warm start" Model-based CoSaMP routine for the (t+1)th update of x, x^{t+1}, in each iteration, where the initial guess of the solution to (6) is given by the current signal estimate x^t.

We now analyze our proposed descent scheme. We obtain the following theoretical result:

Theorem III.1. Given an initialization x0 ∈ Ms satisfying dist(x0, x∗) ≤ δ0‖x∗‖2, for 0 < δ0 < 1, if the number of Gaussian measurements satisfies

    m > C(s + log card(𝕄4s)),

then the iterates x^{t+1} of Algorithm 1 satisfy:

    dist(x^{t+1}, x∗) ≤ ρ0 dist(x^t, x∗),    (7)

where x^t, x^{t+1}, x∗ ∈ Ms, and 0 < ρ0 < 1 is a constant, with probability greater than 1 − e^{−γm}, for a positive constant γ.

Proof sketch: Via an algebraic derivation similar to the one provided in [17], the per-iteration error for the tth iteration of Alg. 1, with L iterations of Model-based CoSaMP, can be derived as:

    ‖x^{t+1} − x∗‖2 ≤ (ρ1ρ3)^L ‖x∗ − x^t‖2 + ((ρ1ρ4 + ρ2)/(1 − ρ1ρ3)) ‖Eph‖2,    (8)

where ρ1, ρ2, ρ3, ρ4 are appropriate constants, and Eph is the error in estimating the phase in the tth run of Model-based CoPRAM. The second part of this proof requires a bound on the phase error term ‖Eph‖2, which can be analytically derived as:

    ‖Eph‖2² = (4/m) Σ_{i=1}^m ⟨ai, x∗⟩² · 1{sign(ai⊤x^t) sign(ai⊤x∗) = −1}.
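The phase-error term has a direct empirical counterpart, which is useful for sanity-checking the analysis in simulation (an illustrative helper of ours):

    import numpy as np

    def phase_error_sq(A, x_t, x_star):
        # ||E_ph||^2 = (4/m) * sum of <a_i, x*>^2 over sign mismatches.
        m = A.shape[0]
        mismatch = np.sign(A @ x_t) * np.sign(A @ x_star) == -1
        return (4.0 / m) * np.sum(((A @ x_star) ** 2)[mismatch])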

We bound this term by invoking Lemma III.2.

Lemma III.2. As long as the initial estimate is a small distance away from the true signal x∗ ∈ Ms, dist(x0, x∗) ≤ δ0‖x∗‖2, and subsequently, dist(x^t, x∗) ≤ δ0‖x∗‖2, where x^t is the tth update of Algorithm 1, then the following bound holds:

    (4/m) Σ_{i=1}^m ⟨ai, x∗⟩² · 1{(ai⊤x^t)(ai⊤x∗) ≤ 0} ≤ ρ5² ‖x^t − x∗‖2²,

with probability greater than 1 − e^{−γ2 m}, where γ2 is a positive constant, as long as m > C(s + log card(𝕄4s)) and ρ5² = 0.0256.

We therefore achieve a per-step error reduction scheme of the form:

    ‖x^{t+1} − x∗‖2 ≤ ρ0 ‖x^t − x∗‖2,

if the initial estimate x0 satisfies ‖x0 − x∗‖2 ≤ δ0‖x∗‖2, and this result can be trivially extended to the case where the initial estimate x0 satisfies ‖x0 + x∗‖2 ≤ δ0‖x∗‖2, hence giving the convergence criterion of the form (for ρ0 < 1):

    dist(x^{t+1}, x∗) ≤ ρ0 dist(x^t, x∗).

The complete proof of Theorem III.1 and Lemma III.2 can be found in Appendix A of the full paper [28]. We present a corollary of Theorem III.1 for tree-sparse signals.

Corollary III.3. As a consequence of Theorem III.1, if Ms is a model representing rooted tree-sparse signals with sparsity s, then Algorithm 1 is linearly convergent and requires a Gaussian sample complexity of m > Cs, as long as the initialization x0 satisfies dist(x0, x∗) ≤ δ0‖x∗‖2.

The proof of this corollary can be found in Appendix A of the full version of this paper [28]. Observe that m = O(s) samples are necessary for reconstructing any s-sparse parameter vector even in the linear case (where perfect phase information is available), and therefore Theorem III.1 implies information-theoretic optimality (up to constants) of our proposed approach.

B. Initialization of Model-based CoPRAM

The first stage (Lines 1-6 of Algorithm 1) of Model-based CoPRAM uses a spectral initialization approach, similar to that provided in previous sparse phase retrieval methods [5], [14]–[16], [26]. We construct a biased estimator of the squared true signal coefficients, which we call the signal marginal matrix:

    M = (1/m) Σ_{i=1}^m yi² ai ai⊤.

The jth signal coefficient can be estimated from the diagonal term Mjj = (1/m) Σ_{i=1}^m yi² aij², and the set of all Mjj's can be calculated in O(mn) time. The approximate support estimate Ŝ can be extracted by performing an exact or approximate tree projection algorithm [22] on the n-dimensional diagonal of the marginal matrix M. From this we obtain the sub-matrix MŜ, whose rows and columns are projected onto Ŝ. This is followed by a spectral technique ([5], [14]–[16]), which extracts the top left singular vector (s.v.) of M to construct a good initial estimate x0 (Lines 4-6 of Algorithm 1).

To provide the intuition behind this strategy, we leverage the fact that the diagonal elements of the expectation matrix E[M] are given by E[Mjj] = ‖x∗‖2² + 2x∗j². The signal marginals Mjj corresponding to j ∈ S are larger, in expectation, than those corresponding to j ∈ Sc. Therefore the signal marginals Mjj serve as a good indicator for extracting an approximate support Ŝ of x∗. We additionally impose the structure 𝕄s on this sparse initial vector, by utilizing an approximate model projection algorithm (such as tree projection [22]) (Line 3 of Algorithm 1). We demonstrate experimentally that this initialization strategy produces a good estimate of x∗. We do not have a full theoretical characterization of the initialization stage, and intend to pursue this in future work.

IV. EXPERIMENTS

In this section, we present experimental results to demonstrate the empirical advantages of Model-based CoPRAM (specifically, for the tree-sparsity model) over standard sparse phase retrieval algorithms (specifically, CoPRAM [16], which provides the best available empirical performance for the standard case). We consider two different sizes (32 × 32 and 64 × 64) of an image of Rice University's Lovett Hall, as shown in Figure 2. This image is modeled as sparse in the Haar wavelet basis, with the number of levels of decomposition chosen to be log2 n, where n is the number of image pixels.

Fig. 2: Image considered for simulations, resized to 32 × 32 and 64 × 64 pixels, considered to be sparse in the Haar basis.

The Tree CoPRAM and CoPRAM algorithms were run for the following experimental settings: n = 1024 and n = 4096. The original image x̂ was sparsified by fixing s and picking the top s wavelet coefficients of x̂. This sparsified image is considered to be the s-sparse tree-structured ground truth x∗.

Phase transitions: We demonstrate the superior performance of the Tree CoPRAM algorithm in comparison to CoPRAM, through a series of phase transition graphs and diagrams. In Figure 3, we illustrate two different settings of sparsities for an n = 1024 dimensional x∗, s = 10 and s = 31, and compare the performances of CoPRAM and Tree CoPRAM by plotting the number of measurements m on the horizontal axis against the probability of successful recovery (the fraction of trials in which ‖x^{t0} − x∗‖2/‖x∗‖2 ≤ 0.05). It is clear that far fewer samples are required for successful recovery when Tree CoPRAM is used instead of CoPRAM.
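Each point on such a curve is an empirical success rate; a sketch of the corresponding Monte Carlo harness (our illustration, reusing the helpers from earlier sections; the trial count and iteration budget are arbitrary choices):

    import numpy as np

    def recovery_probability(x_star, m, s, trials=50, tol=0.05,
                             model_approx=top_s_approx):
        # Fraction of trials with dist(x_hat, x*)/||x*|| <= tol, as in Fig. 3.
        successes = 0
        for seed in range(trials):
            A, y = observe(x_star, m, p=1, seed=seed)
            x_hat = model_copram(A, y, s, t0=30, model_approx=model_approx)
            successes += dist(x_hat, x_star) <= tol * np.linalg.norm(x_star)
        return successes / trials

Sweeping m, and switching model_approx between plain thresholding (CoPRAM-style) and a tree projection (Tree CoPRAM), traces out the two phase transition curves.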

Fig. 3: Phase transitions for CoPRAM and Tree CoPRAM for sparsities s = 10 and s = 31 on an n = 1024 dimensional signal. (Plot: probability of recovery versus number of measurements, m = 100 to 500; legend: CoPRAM s=10, Tree CoPRAM s=10, CoPRAM s=31, Tree CoPRAM s=31.)

Fig. 4: Phase transition diagrams for (a) CoPRAM and (b) Tree CoPRAM on a signal of dimension n = 4096.

Figure 4 shows phase transitions for image size n = 4096, at different sparsities (s = 10, 20, 31, 41, 51, 61, 72, 82, 92, 102) and a number of measurements taking (approximately) uniform integer values between m = 52 and m = 512. It is clear that the phase transition plot of Tree CoPRAM demonstrates better sample complexity w.r.t. CoPRAM.

Running time performance: In our final set of results, the running time performance of Tree CoPRAM w.r.t. CoPRAM is tabulated in Table I. The simulations were run using MATLAB R2017b on a desktop computer with an Intel Xeon E5-2620 processor, 12 CPUs at 2.4GHz, and 64GB RAM. The comparative performance of CoPRAM w.r.t. other sparse phase retrieval algorithms has been discussed in detail in Section 5 of [16].

TABLE I: Average running time in seconds of Tree CoPRAM v/s CoPRAM for n = 1024.

Algorithm       s=10, m=308   s=20, m=410   s=20, m=512
CoPRAM          0.0241        0.0455        0.0433
Tree CoPRAM     0.0119        0.0336        0.0302

REFERENCES

[1] R. Gerchberg and W. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35(237), 1972.
[2] J. Fienup. Phase retrieval algorithms: a comparison. Applied Optics, 21(15):2758–2769, 1982.
[3] E. Candes, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math., 66(8):1241–1274, 2013.
[4] E. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inform. Theory, 61(4):1985–2007, 2015.
[5] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 2796–2804, 2013.
[6] G. Wang, G. Giannakis, Y. Saad, and J. Chen. Solving most systems of random quadratic equations. In Adv. Neural Inf. Proc. Sys. (NIPS), 2017.
[7] R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 855–863, 2014.
[8] E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.
[9] D. Needell, J. Tropp, and R. Vershynin. Greedy signal recovery review. In Proc. Asilomar Conf. Sig. Sys. Comput., pages 1048–1050. IEEE, 2008.
[10] K. Do Ba, P. Indyk, E. Price, and D. Woodruff. Lower bounds for sparse recovery. In Proc. ACM Symp. Discrete Alg. (SODA), pages 1190–1197, 2010.
[11] M. Moravec, J. Romberg, and R. Baraniuk. Compressive phase retrieval. In Opt. Eng.+Appl., pages 670120–670120. International Society for Optics and Photonics, 2007.
[12] M. Iwen, A. Viswanathan, and Y. Wang. Robust sparse phase retrieval made easy. Ap. Comp. Har. An., 42(1):135–142, 2017.
[13] H. Ohlsson, A. Yang, R. Dong, and S. Sastry. CPRL: an extension of compressive sensing to the phase retrieval problem. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 1367–1375, 2012.
[14] T. Cai, X. Li, and Z. Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. Ann. Stat., 44(5):2221–2251, 2016.
[15] G. Wang, L. Zhang, G. Giannakis, M. Akçakaya, and J. Chen. Sparse phase retrieval via truncated amplitude flow. IEEE Transactions on Signal Processing, 66(2):479–491, 2018.
[16] G. Jagatap and C. Hegde. Fast, sample efficient algorithms for structured phase retrieval. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 4924–4934, 2017.
[17] G. Jagatap and C. Hegde. Sample efficient algorithms for recovering structured signals from magnitude-only measurements. arXiv preprint arXiv:1705.06412, 2017.
[18] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Royal Stat. Soc. Stat. Meth., 68(1):49–67, 2006.
[19] R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Trans. Info. Theory, 56(4):1982–2001, Apr. 2010.
[20] C. Cartis and A. Thompson. An exact tree projection algorithm for wavelets. IEEE Sig. Proc. Lett., 20(11):1026–1029, 2013.
[21] C. Hegde, P. Indyk, and L. Schmidt. Fast algorithms for structured sparsity. Bulletin of the EATCS, 1(117):197–228, Oct. 2015.
[22] C. Hegde, P. Indyk, and L. Schmidt. A fast approximation algorithm for tree-sparse recovery. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 1842–1846, June 2014.
[23] H. Zhang and Y. Liang. Reshaped Wirtinger flow for solving quadratic systems of equations. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 2622–2630, 2016.
[24] C. Hegde, P. Indyk, and L. Schmidt. Nearly linear-time model-based compressive sensing. In Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 588–599, July 2014.
[25] E. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval from coded diffraction patterns. Ap. Comp. Har. An., 39(2):277–299, 2015.
[26] Y. Chen and E. Candes. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 739–747, 2015.
[27] G. Wang and G. Giannakis. Solving random systems of quadratic equations via truncated generalized gradient flow. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 568–576, 2016.
[28] G. Jagatap and C. Hegde. Towards sample-optimal methods for solving random quadratic equations with structure, 2018. Full version: https://gaurijagatap.github.io/assets/ISIT18.pdf.
