Pegasos

This week we will discuss the paper by Shai Shalev-Shwartz et al.:
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM

The paper presents an iterative algorithm for solving the optimization problem of Support Vector Machines (SVM). The authors show that the number of iterations needed to converge to a solution with ε accuracy is Õ(1/ε). In each iteration the algorithm performs an update based on a single sample or on a small subset of samples (the mini-batch setting), giving it the flavor of stochastic gradient descent (in the single-sample case it is a stochastic sub-gradient descent algorithm). As a result, the number of iterations is not directly related to the number of samples. This property makes the algorithm attractive for large-scale problems.
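To make the single-sample update concrete, here is a minimal NumPy sketch of one way to implement it (the step size 1/(λt) and the optional projection onto the ball of radius 1/√λ follow the paper; the function and parameter names are my own):

```python
import numpy as np

def pegasos(X, y, lam=0.1, T=1000, seed=0):
    """Single-sample Pegasos sketch: stochastic sub-gradient descent on
    the SVM objective (lam/2)*||w||^2 + (1/n)*sum_i max(0, 1 - y_i*<w, x_i>).
    X is an (n, d) array; y holds labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)            # draw one training example at random
        eta = 1.0 / (lam * t)          # step size 1/(lambda * t)
        if y[i] * (X[i] @ w) < 1:      # margin violated: hinge loss is active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                          # margin satisfied: only the regularizer acts
            w = (1 - eta * lam) * w
        # optional projection onto the ball of radius 1/sqrt(lambda)
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, 1.0 / (np.sqrt(lam) * norm))
    return w
```

The mini-batch variant simply replaces the single sample with the average sub-gradient over a random subset of size k.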

If you don’t have much time this week, there is also a shorter ICML version available. I recommend the longer version: it presents a slight generalization of the algorithm and is generally less condensed ;).

If you think you’ll be able to make it, please sign up.
Note that the discussion time slot has moved to Wednesday morning at 9:30.