Matrix completion and compressed sensing

Adding an application of these works (thanks to Noam):
Candès, Emmanuel J., et al. “Phase retrieval via matrix completion.” SIAM Journal on Imaging Sciences 6.1 (2013): 199-225.
http://arxiv.org/pdf/1109.0573.pdf

Also, the presentation Aditya surveyed is attached here: Wainwright_Winedale_060312.

The two previous emails:

A General Framework for Sparse Estimation with Structure
Last week, we looked at the problem of low-rank matrix recovery, which can be viewed as the matrix generalization of compressed sensing or sparse linear regression (both vector problems). A common approach to solving such problems is to optimize a Loss + Regularizer (L+R) objective over the search space, where the loss captures the fit to the data and the regularizer encourages structure in the parameters. Recently, researchers have studied the driving force behind this L+R formulation and extracted its essential structural ingredients to build a general theory for loss functions and regularizers. Our aim will be to understand this general framework and how it naturally captures and generalizes structure in several kinds of sparse recovery problems.
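
To make the L+R template concrete, here is a minimal sketch (mine, not taken from any of the papers below) of noisy matrix completion written in that form: a squared loss over the observed entries plus a nuclear-norm regularizer, solved by proximal gradient descent, where the proximal step for the nuclear norm is singular-value soft-thresholding. The function names, step size, and regularization weight are illustrative assumptions, not anything prescribed by these papers.

import numpy as np

def svt(A, tau):
    # Singular-value soft-thresholding: the proximal operator of tau * ||.||_* (nuclear norm).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(Y, mask, lam=0.5, step=1.0, iters=300):
    # Proximal gradient for the L+R objective
    #   (1/2) * || mask * (M - Y) ||_F^2  +  lam * ||M||_*
    M = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (M - Y)                  # gradient of the squared loss on observed entries
        M = svt(M - step * grad, step * lam)   # prox step on the regularizer encourages low rank
    return M

# Toy usage: recover a rank-2 matrix from roughly 50% noisy observations.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.5
Y = mask * (M_true + 0.01 * rng.standard_normal((30, 30)))
M_hat = complete(Y, mask)
print("relative error:", np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))

Swapping the squared loss and nuclear norm for other loss/regularizer pairs (e.g., an L1 penalty for vector sparsity) gives the other instances of the framework discussed in the main paper.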
Main paper:
S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu (2012). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4): 538–557.
[Totally] Optional: For those of you interested in the applications to low-rank matrix completion/factorization that we saw last time, the theory in the above work has subsequently been used to derive sharp guarantees for noisy matrix completion problems. Some references for your benefit:
A. Agarwal, S. Negahban, and M. J. Wainwright (2012). Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Annals of Statistics, 40(2):1171–1197.
S. Negahban and M. J. Wainwright (2012). Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 13: 1665–1697, May 2012.
The previous meeting:
The first paper we will address is:
“Exact matrix completion via convex optimization”
E. J. Candès and B. Recht. Foundations of Computational Mathematics, 2009. Springer.
http://link.springer.com/article/10.1007/s10208-009-9045-5