OPT2020

We welcome you to participate in the 12th OPT Workshop on Optimization for Machine Learning. This year's OPT workshop will be run as a virtual event together with NeurIPS. We particularly encourage submissions in the area of adaptive stochastic methods and generalization performance.


We are looking forward to an exciting OPT 2020!


All events online

Schedule

Time Speaker Title
6:00am-7:00am Socializing in gather.town. Login and say hello!


Session 1 (Moderator: Sebastian Stich, co-moderator Courtney Paquette)

6:50am-7:00am Organizers Opening Remarks
7:00am-7:30am Tong Zhang (HKUST) The Convexity of Learning Infinite-width Deep Neural Networks [abstract]
7:30am-8:00am Volkan Cevher (EPFL) Adaptation and universality in first-order methods [abstract]
8:00am-8:30am Spotlight Presentations
live Q&A
10-minute talks available (on-demand) on the website
Presenters will join the Zoom session for live Q&A
  • Laurent Condat, Distributed Proximal Splitting Algorithms with Rates and Acceleration
  • Zhize Li, PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
  • Ohad Shamir, Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?
  • Tiffany Vlaar, Constraint-Based Regularization of Neural Networks
  • Mohammadi Zaki, Employing No Regret Learners for Pure Exploration in Linear Bandits
[papers]
8:30am-10:00am Break and Poster Session Join the poster session on gather.town.
Authors will present their work.
[posters]


Session 2 (Moderator: Martin Takáč, co-moderator Courtney Paquette)

9:50am-10:00am Organizers Opening Remarks
10:00am-10:30am Andreas Krause (ETH Zurich) Adaptive Sampling for Stochastic Risk-Averse Learning [abstract]
10:30am-11:00am Donald Goldfarb (Columbia) Practical Kronecker-factored BFGS and L-BFGS methods for training deep neural networks [abstract]
11:00am-11:30am Spotlight Presentations
live Q&A
10-minute talks available (on-demand) on the website
Presenters will join the Zoom session for live Q&A
  • Samuel Horvath, Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
  • Guan-Horng Liu, DDPNOpt: Differential Dynamic Programming Neural Optimizer
  • Nicolas Loizou, Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
  • Sharan Vaswani, Adaptive Gradient Methods Converge Faster with Over-Parameterization (and you can do a line-search)
  • Sharan Vaswani, How to make your optimizer generalize better
[papers]
11:30am-12:00pm Break Coffee break in gather.town.
12:00pm-12:30pm Suvrit Sra (MIT) SGD without replacement: optimal rate analysis and more [abstract]
12:30pm-2:00pm Break and Poster Session Join the poster session on gather.town.
Authors will present their work.
[posters]


Session 3 (Moderator: Mark Schmidt, co-moderator Martin Takáč)

1:50pm-2:00pm Organizers Opening Remarks
2:00pm-2:30pm Ashia Wilson (MSR) Stochastic Geodesic Optimization [abstract]
2:30pm-3:00pm Rachel Ward (UT Austin) Concentration for matrix products, and convergence of Oja's algorithm for streaming PCA [abstract]
3:00pm-3:30pm Spotlight Presentations
live Q&A
10-minute talks available (on-demand) on the website
Presenters will join the Zoom session for live Q&A
  • Zhan Gao, Incremental Greedy BFGS: An Incremental Quasi-Newton Method with Explicit Superlinear Rate
  • Wenjie Li, Variance Reduction on Adaptive Stochastic Mirror Descent
  • Preetum Nakkiran, Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
  • Denny Wu, When Does Preconditioning Help or Hurt Generalization?
  • Chengrun Yang, TenIPS: Inverse Propensity Sampling for Tensor Completion
[papers]
3:30pm-4:30pm Break Coffee break in gather.town.
4:30pm-5:00pm Michael Friedlander (UBC) Fast convergence of stochastic subgradient method under interpolation [abstract]
5:00pm-6:00pm Break and Poster Session Join the poster session on gather.town.
Authors will present their work.
[posters]


Session 4 (Moderator: Quanquan Gu, co-moderator Mark Schmidt)

5:50pm-6:00pm Organizers Opening Remarks
6:00pm-6:30pm Deanna Needell (UCLA), co-speaker Hanbaek Lyu (UCLA) Online nonnegative matrix factorization for Markovian and other real data [abstract]
6:30pm-7:00pm Spotlight Presentations
live Q&A
10-minute talks available (on-demand) on the website
Presenters will join the Zoom session for live Q&A
  • Tolga Ergen, Convex Programs for Global Optimization of Convolutional Neural Networks in Polynomial-Time
  • Charles Guille-Escuret, A Study of Condition Numbers for First-Order Optimization
  • Lewis Liu, Affine-Invariant Analysis of Frank-Wolfe on Strongly Convex Sets
  • Sanae Lotfi, Stochastic Damped L-BFGS with controlled norm of the Hessian approximation
  • Dongruo Zhou, On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization
[papers]
7:00pm-7:05pm Organizers Closing Remarks