Weekly Seminar – 1/19/2018 and 1/26/2018 – Power of Gradient Descent

Invited talk by Dr. Chinmay Hegde of ECpE on:

————————————————
“The power of gradient descent”

Many of the recent advances in machine learning can be attributed to two factors: (i) the greater availability of data, and (ii) new and efficient optimization algorithms. Curiously, the simplest primitive from numerical analysis, gradient descent, is at the forefront of these newer ML techniques, even though the functions being optimized are often extremely non-smooth and/or non-convex.

In this series of chalk talks, I will discuss some recent theoretical advances that may shed light on why this is the case and how to properly approach the design of new training techniques.
————————————————
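The talks take the gradient descent primitive itself as a given. As quick background, here is a minimal sketch of plain gradient descent on a toy smooth, convex objective (an illustrative Python example, not material from the talk):

import numpy as np

def gradient_descent(grad, x0, step_size=0.1, n_iters=100):
    # Repeatedly step in the direction of steepest descent.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step_size * grad(x)
    return x

# Toy example: f(x) = ||x||^2 has gradient 2x and minimizer 0.
x_min = gradient_descent(lambda x: 2 * x, x0=[3.0, -4.0])
print(x_min)  # converges toward the origin

On the non-smooth or non-convex objectives the abstract alludes to, the same update is run with a (sub)gradient or a stochastic gradient estimate, which is where the theoretical questions the talks address arise.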

When?
– 12 pm to 1 pm, Fridays, January 19 and 26

Where?
– Room 2222, Coover Hall

————————————————

Lecture notes are available here.
