Regularisers in Semidefinite Programming

Low-rank methods for semidefinite programming (SDP) have gained considerable popularity, especially in machine learning applications. Their analyses often assume the use of determinant-based regularisers, which are rarely implemented in practice, because computing their gradient takes time cubic in the dimension in conventional implementations. We extend the convergence analyses of low-rank methods to a wide class of regularisers. Further, we show that the gradient of a well-known regulariser can be computed in time linear in the dimension, which makes the regularisation practical. We illustrate our results on the MAXCUT SDP relaxation.

Joint work with Jakub Marecek (IBM), Yury Maximov (LANL), and Martin Takac (Lehigh).

Host: Yury Maximov
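The abstract does not spell out which regulariser admits a linear-time gradient. Purely as an illustration of the underlying mechanism, the sketch below assumes a log-determinant term applied to the r x r Gram matrix of a Burer-Monteiro factorisation X = V V^T: forming V^T V costs O(n r^2) and the remaining linear algebra acts on r x r matrices, so the regulariser's gradient is linear in the dimension n for fixed rank r, in contrast to the O(n^3) cost of differentiating log det(X) directly. The function maxcut_lowrank_step, the step size, and the weight lam are hypothetical choices for this sketch, not the method presented in the talk.

    import numpy as np

    def maxcut_lowrank_step(L, V, lam=0.1, step=1e-2):
        # One projected gradient ascent step for the low-rank (Burer-Monteiro)
        # MAXCUT relaxation:  max (1/4)<L, V V^T> + lam * log det(V^T V),
        # subject to unit-norm rows of V (i.e. diag(V V^T) = 1).
        # V is n x r with r << n; every step below costs O(n r^2), linear in n.
        # The log-det term is an illustrative assumption, not the talk's regulariser.
        grad_obj = 0.5 * (L @ V)                       # gradient of (1/4) tr(L V V^T)
        G = V.T @ V                                    # r x r Gram matrix, O(n r^2)
        grad_reg = 2.0 * np.linalg.solve(G, V.T).T     # gradient of log det(V^T V) is 2 V G^{-1}
        V = V + step * (grad_obj + lam * grad_reg)
        V /= np.linalg.norm(V, axis=1, keepdims=True)  # project back onto diag(V V^T) = 1
        return V

    rng = np.random.default_rng(0)
    n, r = 50, 8
    A = (rng.random((n, n)) < 0.2).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                        # random adjacency matrix
    L = np.diag(A.sum(axis=1)) - A                     # graph Laplacian
    V = rng.standard_normal((n, r))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(500):
        V = maxcut_lowrank_step(L, V)
    print("relaxation value:", 0.25 * np.sum(L * (V @ V.T)))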