Compilation of topics since April

Sent September 9:
This week we will go over the paper:
It discusses primal-dual averaging, one of the methods mentioned in the review we covered but didn't quite work out at the time.
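For background, here is a minimal sketch of the basic dual averaging update in the unconstrained Euclidean case, where the argmin has a closed form; the beta_t = sqrt(t) schedule and the running iterate average are common textbook choices, and an assumption here, not necessarily the variant the paper analyzes:

import numpy as np

def dual_averaging(grad_fn, dim, num_steps=1000):
    # x_{t+1} = argmin_x { <sum_{s<=t} g_s, x> + beta_t * ||x||^2 / 2 }
    #         = -g_sum / beta_t   (closed form in the Euclidean case)
    x = np.zeros(dim)
    g_sum = np.zeros(dim)
    avg_x = np.zeros(dim)
    for t in range(1, num_steps + 1):
        g_sum += grad_fn(x)          # grad_fn returns a subgradient at x
        beta_t = np.sqrt(t)          # common schedule; an assumption here
        x = -g_sum / beta_t
        avg_x += (x - avg_x) / t     # running average of the iterates
    return avg_x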
Sent August 26:
Reminder – As we discussed last week, we will talk about the second part of the optimization lecture on Thursday.

Sent August 13:
Sent August 5:
I want to dedicate a few sessions to the area of optimization in ML. The idea is to cover new results, but also to try to build a “map” of the area and draw the connections between its subfields.
To bring us all up to a common level, this week, instead of reading a paper on a specific algorithmic/theoretical result, I thought we should read a review of the subject. I couldn’t find a good written one, but I found a nice NIPS tutorial on the subject (so you don’t even have to read :)).
Sent July 28:
from ICML 2013.
Sent July 14:
This week (Thursday @14:30) we will continue with Gaussian Processes. The subject will be the paper from ICML 2011: http://www.icml-2011.org/papers/323_icmlpaper.pdf, which applies GPs to Reinforcement Learning.
Also, here is a motivational video (learning this task previously required hundreds of trials; this algorithm does it in 7):
Sent July 3:
It was suggested that we do a couple of sessions on Gaussian Processes. For next week, please read Chapters 2 and 5 of the book Gaussian Processes for Machine Learning, available at http://www.gaussianprocess.org/gpml/.
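If you want something to experiment with alongside Chapter 2, here is a minimal sketch of standard GP regression with a squared-exponential kernel, following the usual Cholesky-based predictive equations; the kernel hyperparameters and the noise level below are placeholder values:

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_predict(X, y, X_star, noise_var=0.1):
    # Posterior mean and variance of GP regression at the test points X_star.
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)                            # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_star = rbf_kernel(X_star, X)
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    var = rbf_kernel(X_star, X_star).diagonal() - np.sum(v**2, axis=0)
    return mean, var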
Sent June 16:
This week we will go over the paper:
“A Provably Efficient Algorithm for Training Deep Networks” http://arxiv.org/abs/1304.7045

Sent June 9:

Odalric will lead the discussion on the paper –

“Follow the Leader If You Can, Hedge If You Must”
http://arxiv.org/pdf/1301.0534v2.pdf, by Steven de Rooij, Tim van Erven, Peter D. Grünwald, and Wouter M. Koolen.
This paper considers the online learning setting and looks for a way to optimally tune the Hedge algorithm so as to obtain a (really) adaptive algorithm.

A quick reference to the Hedge Algorithm: http://onlineprediction.net/n=Main.HedgeAlgorithm

Some motivation for why this setting is useful can be found in http://hal.archives-ouvertes.fr/docs/00/71/51/77/PDF/Devaine-Goude-Stoltz-Gaillard.pdf.
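For reference, a minimal sketch of vanilla Hedge with a fixed learning rate eta; the adaptive tuning of eta is exactly what the paper addresses, so the fixed value below is purely for illustration:

import numpy as np

def hedge(loss_matrix, eta=0.5):
    # loss_matrix[t, i] is the loss of expert i at round t (assumed in [0, 1]).
    # Returns the learner's expected loss at each round.
    T, n = loss_matrix.shape
    cum_loss = np.zeros(n)                # cumulative loss per expert
    learner_losses = np.zeros(T)
    for t in range(T):
        w = np.exp(-eta * cum_loss)       # exponential weights
        p = w / w.sum()                   # distribution over experts
        learner_losses[t] = p @ loss_matrix[t]
        cum_loss += loss_matrix[t]
    return learner_losses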

Sent April 29:

This week we’ll be reading the paper “Sparse inverse covariance estimation with the graphical lasso” by Friedman et al. (http://www-stat.stanford.edu/~tibs/ftp/graph.pdf). The paper discusses the problem of estimating sparse graphs by applying a lasso penalty to the inverse covariance matrix. The connection to graphs: conditional independence between two variables may be deduced when the corresponding entry of the inverse covariance matrix is zero; for a reminder on the subject, see the attached tutorial.
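For anyone who wants to try it in practice, a minimal sketch using scikit-learn’s GraphicalLasso estimator; the toy data and the alpha value are placeholders (in practice alpha would be chosen by cross-validation, e.g. with GraphicalLassoCV):

import numpy as np
from sklearn.covariance import GraphicalLasso

# Toy data: 200 samples of a 5-dimensional Gaussian.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))

model = GraphicalLasso(alpha=0.1)   # alpha controls the sparsity penalty
model.fit(X)

precision = model.precision_        # estimated sparse inverse covariance
# Off-diagonal zeros correspond to conditional independencies, i.e.
# missing edges in the estimated graph.
print(np.round(precision, 2))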