Showing posts with label phaserecovery.

Thursday, June 13, 2019

Inverse Scattering via Transmission Matrices: Broadband Illumination and Fast Phase Retrieval Algorithms

** Nuit Blanche is now on Twitter: @NuitBlog **

Here is an enduring example of the Donoho-Tanner phase transition and of its ability to differentiate between reconstruction algorithms (I used to use an earlier example to talk about the peer review process).



When a narrowband coherent wavefront passes through or reflects off of a scattering medium, the relationship between the incident and output fields is linear and so can be described by a transmission matrix (TM). If the TM for a given scattering medium is known, one can computationally “invert” the scattering process and image through the medium. In this work, we investigate the effect of broadband illumination, i.e., what happens when the wavefront is only partially coherent? Can one still measure a TM and “invert” the scattering? To accomplish this task, we measure TMs using the double phase retrieval technique, a method which uses phase retrieval algorithms to avoid difficult-to-capture interferometric measurements. Generally, using the double phase retrieval method requires performing massive amounts of computation. We alleviate this burden by developing a fast, GPU-accelerated algorithm, prVAMP, which lets us reconstruct 256^2 x 64^2 TMs in under five hours. After reconstructing several TMs using this method, we find that, as expected, reducing the coherence of the illumination significantly restricts our ability to invert the scattering process. Moreover, we find that past a certain bandwidth an incoherent, intensity-based scattering model better describes the scattering process and is easier to invert.
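For readers who want to see what the calibration problem looks like, here is a minimal sketch (not the authors' code) of the intensity-only measurement model behind double phase retrieval; the matrix sizes and binary probe patterns below are illustrative assumptions, not the paper's experimental settings:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_probes = 64, 16, 512

# Unknown complex transmission matrix of the scattering medium.
T = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2)

# Known calibration patterns displayed on the modulator (binary here).
P = rng.integers(0, 2, size=(n_in, n_probes)).astype(float)

# The camera records intensities only: one phaseless linear measurement
# per output pixel and per probe pattern.
I = np.abs(T @ P) ** 2              # shape (n_out, n_probes)

# Each camera pixel j now defines a standard phase retrieval problem:
#   find t_j such that |P.T @ t_j|^2 matches I[j, :],
# and a phase retrieval solver (prVAMP in the paper) is run on each row.

At realistic sensor resolutions this amounts to a very large number of such per-pixel problems, which is where a fast, GPU-accelerated solver pays off.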
 Implementations: 

Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Friday, July 13, 2018

Phase Retrieval Under a Generative Prior


Vlad just sent me the following: 
Hi Igor,

I am writing regarding a paper you may find of interest, co-authored with Paul Hand and Oscar Leong. It applies a deep generative prior to phase retrieval, with surprisingly good results! We can show recovery occurs at optimal sample complexity for gaussian measurements, which in a sense resolves the sparse phase retrieval O(k^2 log n) bottleneck.

https://arxiv.org/pdf/1807.04261.pdf


Best,

-Vlad

Thanks Vlad ! Here is the paper:

Phase Retrieval Under a Generative Prior by Paul Hand, Oscar Leong, Vladislav Voroninski
The phase retrieval problem asks to recover a natural signal y_0 ∈ R^n from m quadratic observations, where m is to be minimized. As is common in many imaging problems, natural signals are considered sparse with respect to a known basis, and the generic sparsity prior is enforced via ℓ1 regularization. While successful in the realm of linear inverse problems, such ℓ1 methods have encountered possibly fundamental limitations, as no computationally efficient algorithm for phase retrieval of a k-sparse signal has been proven to succeed with fewer than O(k^2 log n) generic measurements, exceeding the theoretical optimum of O(k log n). In this paper, we propose a novel framework for phase retrieval by 1) modeling natural signals as being in the range of a deep generative neural network G: R^k → R^n and 2) enforcing this prior directly by optimizing an empirical risk objective over the domain of the generator. Our formulation has provably favorable global geometry for gradient methods, as soon as m = O(k d^2 log n), where d is the depth of the network. Specifically, when suitable deterministic conditions on the generator and measurement matrix are met, we construct a descent direction for any point outside of a small neighborhood around the unique global minimizer and its negative multiple, and show that such conditions hold with high probability under Gaussian ensembles of multilayer fully-connected generator networks and measurement matrices. This formulation for structured phase retrieval thus has two advantages over sparsity based methods: 1) deep generative priors can more tightly represent natural signals and 2) information theoretically optimal sample complexity. We corroborate these results with experiments showing that exploiting generative models in phase retrieval tasks outperforms sparse phase retrieval methods.
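As a rough illustration of the formulation, here is a hedged sketch that enforces a generative prior by minimizing the empirical risk over the generator's latent space; the tiny random ReLU "generator", the derivative-free solver, and all dimensions are stand-ins for illustration only, not the architecture or optimizer from the paper:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
k, h, n, m = 5, 20, 50, 120          # latent dim, hidden width, signal dim, measurements

# Fixed random two-layer ReLU "generator" G: R^k -> R^n.
W1 = rng.standard_normal((h, k))
W2 = rng.standard_normal((n, h))
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

z_true = rng.standard_normal(k)
y0 = G(z_true)                                 # signal assumed to lie in the range of G
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
b = np.abs(A @ y0)                             # phaseless observations

# Empirical risk over the latent variable z, optimized directly.
risk = lambda z: np.sum((np.abs(A @ G(z)) - b) ** 2)
z_hat = minimize(risk, rng.standard_normal(k), method="Nelder-Mead",
                 options={"maxiter": 5000}).x

y_hat = G(z_hat)
err = min(np.linalg.norm(y_hat - y0), np.linalg.norm(y_hat + y0)) / np.linalg.norm(y0)
print("relative error (up to sign):", err)

Being nonconvex, this toy optimization may need a few random restarts; the paper's point is that, with gradient methods and the stated sample complexity, the landscape of this objective is provably benign.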



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, April 19, 2017

Phase Transitions of Spectral Initialization for High-Dimensional Nonconvex Estimation


  [Personal message: I will be at ICLR next week, let's grab some coffee if you are there]
Yue just sent me the following:
Dear Igor,

I hope all is well.

We recently posted a paper on arXiv on analyzing the exact asymptotic performance of a popular spectral initialization method for various nonconvex signal estimation problems (such as phase retrieval). We think you and readers of your blog might be interested in this research.

The paper can be found here:

https://arxiv.org/abs/1702.06435

Best regards,
Yue
Thanks Yue, two phase transitions ! I like it: Phase Transitions of Spectral Initialization for High-Dimensional Nonconvex Estimation by Yue M. Lu, Gen Li
We study a spectral initialization method that serves a key role in recent work on estimating signals in nonconvex settings. Previous analysis of this method focuses on the phase retrieval problem and provides only performance bounds. In this paper, we consider arbitrary generalized linear sensing models and present a precise asymptotic characterization of the performance of the method in the high-dimensional limit. Our analysis also reveals a phase transition phenomenon that depends on the ratio between the number of samples and the signal dimension. When the ratio is below a minimum threshold, the estimates given by the spectral method are no better than random guesses drawn from a uniform distribution on the hypersphere, and thus carry no information; above a maximum threshold, the estimates become increasingly aligned with the target signal. The computational complexity of the method, as measured by the spectral gap, is also markedly different in the two phases. Worked examples and numerical results are provided to illustrate and verify the analytical predictions. In particular, simulations show that our asymptotic formulas provide accurate predictions for the actual performance of the spectral method even at moderate signal dimensions.
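For concreteness, here is a minimal sketch of a spectral initializer of the kind analyzed in the paper, specialized to intensity observations with the trivial pre-processing T(y) = y; the dimensions are arbitrary and the alignment printed at the end is the quantity whose phase transition (as a function of m/n) the paper characterizes:

import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 800                       # signal dimension and number of samples
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))       # generalized linear / phase retrieval sensing vectors
y = (A @ x) ** 2                      # intensity observations

# Data matrix D = (1/m) * sum_i T(y_i) a_i a_i^T; the estimate is its top eigenvector.
D = (A * y[:, None]).T @ A / m
eigvals, eigvecs = np.linalg.eigh(D)
x_hat = eigvecs[:, -1]

print("cosine similarity |<x_hat, x>|:", abs(x_hat @ x))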



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, February 02, 2017

Robust Phase Retrieval with the Swept Approximate Message Passing (prSAMP) Algorithm - implementation -

Laurent tells me that he and his co-authors have made the prSAMP implementation available in the IPOL platform/journal.



In phase retrieval, the goal is to recover a complex signal from the magnitude of its linear measurements. While many well-known algorithms guarantee deterministic recovery of the unknown signal using i.i.d. random measurement matrices, they suffer serious convergence issues for some ill-conditioned measurement matrices. As an example, this happens in optical imagers using binary intensity-only spatial light modulators to shape the input wavefront. The problem of ill-conditioned measurement matrices has also been a topic of interest for compressed sensing researchers during the past decade. In this paper, using recent advances in generic compressed sensing, we propose a new phase retrieval algorithm that behaves well for a large class of measurement matrices, including Gaussian and Bernoulli binary i.i.d. random matrices, using both sparse and dense input signals. This algorithm is also robust to the strong noise levels found in some imaging applications.




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, January 04, 2017

PhaseMax: Convex Phase Retrieval via Basis Pursuit - implementation -




Christoph just sent me the following:

 Hey Igor,

Happy New Year!

Tom Goldstein and I created a website for our recent convex phase retrieval method called PhaseMax that avoids lifting (unlike PhaseLift or PhaseCut): 

The website contains links to our arXiv paper and to four other papers that are related to PhaseMax. There is also example code that implements PhaseMax using our solver FASTA.

It would be great if you could feature this on your blog.

Thanks a lot!
Christoph
Christoph Studer
Assistant Professor
School of ECE, Rhodes Hall 331
Cornell University
Ithaca, NY 14853, USA
Thanks Christoph ! Here is the paper: PhaseMax: Convex Phase Retrieval via Basis Pursuit by Tom Goldstein, Christoph Studer

We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax operates in the original signal dimension. We show that the dual problem to PhaseMax is Basis Pursuit, which implies that phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice.
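To make the formulation concrete, here is a hedged sketch of the PhaseMax program, maximize Re<x_anchor, x> subject to |a_i^H x| <= b_i, solved with a generic convex modeling tool (cvxpy) rather than the authors' FASTA code; the noisy anchor below stands in for a proper spectral initializer and all sizes are illustrative:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m = 30, 200
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
b = np.abs(A @ x0)                     # magnitude-only measurements

# Crude anchor: true signal plus noise (a spectral initializer would be used in practice).
x_anchor = x0 + 2.0 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

x = cp.Variable(n, complex=True)
prob = cp.Problem(cp.Maximize(cp.real(x_anchor.conj() @ x)),
                  [cp.abs(A @ x) <= b])
prob.solve()

# Report the error after aligning the global phase.
phase = np.exp(1j * np.angle(np.vdot(x0, x.value)))
err = np.linalg.norm(x.value - x0 * phase) / np.linalg.norm(x0)
print("relative error:", err)

The key design point the abstract highlights is visible here: the variable x lives in the original n-dimensional space, with no lifting to an n x n matrix as in PhaseLift or PhaseCut.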


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, November 21, 2016

An Elementary Proof of Convex Phase Retrieval in the Natural Parameter Space via the Linear Program PhaseMax / Compressed Sensing from Phaseless Gaussian Measurements via Linear Programming in the Natural Parameter Space

 
Vlad just sent me the following:

Hi Igor,


I am writing regarding two new results which you may find of interest, they are joint work with Paul Hand.

The first is an elementary proof that PhaseMax, a linear program in the natural parameter space, achieves noiseless phase retrieval when coupled with the standard spectral initializer. This is in stark contrast to highly technical approaches by Candes et al. and Wright et al. using non-convex programs with the same goals, and simpler yet than the geometric probability/VC dimension arguments by the originators of PhaseMax:


The second is a result of a new kind, which faithfully connects compressed sensing and phase retrieval in the natural parameter space. Specifically, we establish that given an anchor vector that correlates with a sparse vector x_0 and O(k log n) iid Gaussian magnitude measurements about x_0, a linear program in the natural parameter space called SparsePhaseMax recovers x_0 w.h.p., provided that x_0 is sufficiently sparse. This result also establishes a curious connection between phase retrieval and 1-bit compressed sensing.




Best,

-Vlad

Vladislav Voroninski
http://www.mit.edu/~vvlad/ 
 
Thanks Vlad ! Here are the two papers:

An Elementary Proof of Convex Phase Retrieval in the Natural Parameter Space via the Linear Program PhaseMax by Paul Hand, Vladislav Voroninski

The phase retrieval problem has garnered significant attention since the development of the PhaseLift algorithm, which is a convex program that operates in a lifted space of matrices. Because of the substantial computational cost due to lifting, many approaches to phase retrieval have been developed, including non-convex optimization algorithms which operate in the natural parameter space, such as Wirtinger Flow. Very recently, a convex formulation called PhaseMax has been discovered, and it has been proven to achieve phase retrieval via linear programming in the natural parameter space under optimal sample complexity. The current proofs of PhaseMax rely on statistical learning theory or geometric probability theory. Here, we present a short and elementary proof that PhaseMax exactly recovers real-valued vectors from random measurements under optimal sample complexity. Our proof only relies on standard probabilistic concentration and covering arguments, yielding a simpler and more direct proof than those that require statistical learning theory, geometric probability or the highly technical arguments for Wirtinger Flow-like approaches.



Compressed Sensing from Phaseless Gaussian Measurements via Linear Programming in the Natural Parameter Space by Paul Hand, Vladislav Voroninski
We consider faithfully combining phase retrieval with classical compressed sensing. Inspired by the recent novel formulation for phase retrieval called PhaseMax, we present and analyze SparsePhaseMax, a linear program for phaseless compressed sensing in the natural parameter space. We establish that when provided with an initialization that correlates with a k-sparse n-vector, SparsePhaseMax recovers this vector up to global sign with high probability from O(k log(n/k)) magnitude measurements against i.i.d. Gaussian random vectors. Our proof of this fact exploits a curious newfound connection between phaseless and 1-bit compressed sensing. This is the first result to establish bootstrapped compressed sensing from phaseless Gaussian measurements under optimal sample complexity.
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, November 17, 2016

Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity


Ashish just sent me the following:

Hi Igor,
awesome blog you run.

I've got a recent paper on using compressive sensing (L0/L1 regularization) to make phase retrieval more robust. Paper is here:


Just thought I'd bring it to your attention and maybe someone on the blog might be interested.


Thanks Ashish ! This problem in eq (7) very much looks like a co-sparsity one. Here is the paper: Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity by Ashish Tripathi, Ian McNulty, Todd Munson, and Stefan M. Wild
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample’s edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample’s exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
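As a rough illustration of the idea (not the authors' update rules), here is a sketch that couples a Fourier-magnitude data-fit term with a smoothed ℓ1 penalty on finite-difference edges, minimized with a generic solver on a 1-D toy "sample"; the operators, smoothing, and regularization weight are assumptions made for brevity:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 32
x_true = np.zeros(n)
x_true[10:20] = 1.0                        # piecewise-constant sample => sparse edges

F = np.fft.fft(np.eye(n))                  # DFT written as an explicit matrix
I_meas = np.abs(F @ x_true) ** 2           # coherent diffraction intensities

D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # finite-difference (edge) operator
lam, eps = 0.05, 1e-6

def objective(x):
    data_fit = np.sum((np.abs(F @ x) ** 2 - I_meas) ** 2)
    edge_sparsity = np.sum(np.sqrt((D @ x) ** 2 + eps))   # smoothed L1 on the edges
    return data_fit + lam * edge_sparsity

x_hat = minimize(objective, rng.standard_normal(n), method="L-BFGS-B").x
# x_hat can only match x_true up to the trivial shift/flip ambiguities of
# Fourier phase retrieval, which the edge-sparsity prior does not remove.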

 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, May 31, 2016

Phase Retrieval: Robust phase retrieval with the swept approximate message passing (prSAMP) algorithm / On The 2D Phase Retrieval Problem

Today, we have two preprints on phase retrieval. The first one investigates the use of an AMP solver with a binary measurement matrix, while the second looks at how a 2D phase retrieval problem can be solved with an analytical approach when its solution is unique (with no sparsity assumption).



Robust phase retrieval with the swept approximate message passing (prSAMP) algorithm by Boshra Rajaei, Sylvain Gigan, Florent Krzakala, Laurent Daudet

In phase retrieval, the goal is to recover a complex signal from the magnitude of its linear measurements. While many well-known algorithms guarantee deterministic recovery of the unknown signal using i.i.d. random measurement matrices, they suffer serious convergence issues for some ill-conditioned matrices. As an example, this happens in optical imagers using binary intensity-only spatial light modulators to shape the input wavefront. The problem of ill-conditioned measurement matrices has also been a topic of interest for compressed sensing researchers during the past decade. In this paper, using recent advances in generic compressed sensing, we propose a new phase retrieval algorithm that is well adapted to both Gaussian i.i.d. and binary matrices, using both sparse and dense input signals. This algorithm is also robust to the strong noise levels found in some imaging applications.


On The 2D Phase Retrieval Problem by Dani Kogan, Yonina C. Eldar, Dan Oron

The recovery of a signal from the magnitude of its Fourier transform, also known as phase retrieval, is of fundamental importance in many scientific fields. It is well known that due to the loss of Fourier phase the problem in 1D is ill-posed. Without further constraints, there is no unique solution to the problem. In contrast, uniqueness up to trivial ambiguities very often exists in higher dimensions, with mild constraints on the input. In this paper we focus on the 2D phase retrieval problem and provide insight into this uniqueness property by exploring the connection between the 2D and 1D formulations. In particular, we show that 2D phase retrieval can be cast as a 1D problem with additional constraints, which limit the solution space. We then prove that only one additional constraint is sufficient to reduce the many feasible solutions in the 1D setting to a unique solution for almost all signals. These results allow us to obtain an analytical approach (with combinatorial complexity) to solve the 2D phase retrieval problem when it is unique.

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, October 30, 2015

Phase Retrieval: An Overview of Recent Developments

It's nice to take a step back sometimes:


The problem of phase retrieval is a classic one in optics and arises when one is interested in recovering an unknown signal from the magnitude (intensity) of its Fourier transform. While there have existed quite a few approaches to phase retrieval, recent developments in compressed sensing and convex optimization-based signal recovery have inspired a host of new ones. This work presents an overview of these approaches.
Since phase retrieval, by its very nature, is ill-posed, to make the problem meaningful one needs to either assume prior structure on the signal (e.g., sparsity) or obtain additional measurements (e.g., masks, structured illuminations). For both cases, we review conditions for the identifiability of the signal, as well as practical algorithms for signal recovery. In particular, we demonstrate that it is possible to robustly and efficiently identify an unknown signal solely from phaseless Fourier measurements, a fact with potentially far-reaching implications.
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, October 12, 2015

Intensity-only optical compressive imaging using a multiply scattering material : a double phase retrieval system

Here are some advances on this initial concept of ours on Imaging with Nature. The subject was furthered here. Without further ado:

Intensity-only optical compressive imaging using a multiply scattering material : a double phase retrieval system by Boshra Rajaei, Eric W. Tramel, Sylvain Gigan, Florent Krzakala, Laurent Daudet
In this paper, the problem of compressive imaging is addressed using natural randomization by means of a multiply scattering medium. To utilize the medium in this way, its corresponding transmission matrix must be estimated. To calibrate the imager, we use a digital micromirror device (DMD) as a simple, cheap, and high-resolution binary intensity modulator. We propose a phase retrieval algorithm which is well adapted to intensity-only measurements on the camera and to the input binary intensity patterns, both to estimate the complex transmission matrix and to reconstruct images. We demonstrate promising experimental results for the proposed algorithm using the MNIST dataset of handwritten digits as example images.
Related:
 
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, August 18, 2015

Low-Rank Spectral Optimization - implementation -

Michael just let me know of his recent work and attendant implementation. Thanks Michael !




Various applications in signal processing and machine learning give rise to highly structured spectral optimization problems characterized by low-rank solutions. Two important examples that motivate this work are optimization problems from phase retrieval and from blind deconvolution, which are designed to yield rank-1 solutions. An algorithm is described based on solving a certain constrained eigenvalue optimization problem that corresponds to the gauge dual. Numerical examples on a range of problems illustrate the effectiveness of the approach.
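As a quick reminder of why phase retrieval fits this low-rank framework, the sketch below checks numerically that phaseless quadratic measurements of x are linear in the lifted rank-1 matrix X = x x*, which is the structure the gauge-dual eigenvalue formulation exploits; it is not the paper's algorithm, and the sizes are arbitrary:

import numpy as np

rng = np.random.default_rng(5)
n, m = 10, 40
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

X = np.outer(x, x.conj())                       # rank-1, PSD lifted variable

y_quadratic = np.abs(A @ x) ** 2                # |a_i^T x|^2, quadratic in x
y_lifted = np.einsum('ij,jk,ik->i', A, X, A.conj()).real   # a_i^T X conj(a_i), linear in X

assert np.allclose(y_quadratic, y_lifted)       # the two descriptions agree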
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, May 20, 2015

Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems - implementation -

So phase retrieval can actually be fast and achieve near-optimal sample complexity ! Wow !
 

We consider the fundamental problem of solving quadratic systems of equations in n variables, where y_i = |<a_i, x>|^2, i = 1, ..., m, and x ∈ R^n is unknown. We propose a novel method, which starting with an initial guess computed by means of a spectral method, proceeds by minimizing a nonconvex functional as in the Wirtinger flow approach. There are several key distinguishing features, most notably, a distinct objective functional and novel update rules, which operate in an adaptive fashion and drop terms bearing too much influence on the search direction. These careful selection rules provide a tighter initial guess, better descent directions, and thus enhanced practical performance. On the theoretical side, we prove that for certain unstructured models of quadratic systems, our algorithms return the correct solution in linear time, i.e. in time proportional to reading the data, as soon as the ratio m/n between the number of equations and unknowns exceeds a fixed numerical constant. We extend the theory to deal with noisy systems in which we only have y_i ≈ |<a_i, x>|^2 and prove that our algorithms achieve a statistical accuracy, which is nearly un-improvable. We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size — hence the title of this paper. For instance, we demonstrate empirically that the computational cost of our algorithm is about four times that of solving a least-squares problem of the same size.
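Here is a hedged sketch of the two-stage recipe described above, a spectral initializer followed by gradient iterations that drop ("truncate") terms bearing too much influence; the truncation rule, loss, step size, and constants below are simplified stand-ins, not the exact rules from the paper:

import numpy as np

rng = np.random.default_rng(6)
n, m = 100, 600
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2                          # quadratic (phaseless) observations

# Spectral initialization, scaled to the estimated signal energy.
lam = np.sqrt(y.mean())
D = (A * y[:, None]).T @ A / m
z = np.linalg.eigh(D)[1][:, -1] * lam

mu = 0.1
for _ in range(500):
    Az = A @ z
    resid = y - Az ** 2
    # Simplified truncation: keep only terms whose residual is not an outlier.
    keep = np.abs(resid) <= 5.0 * np.mean(np.abs(resid))
    grad = -(2.0 / m) * (A[keep] * (resid[keep] * Az[keep])[:, None]).sum(axis=0)
    z -= (mu / lam ** 2) * grad           # normalized step, Wirtinger-flow style

err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print("relative error (up to sign):", err)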
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
