Reblogging an insightful blog post on the Central Limit Theorem (CLT) and the Weak Law of Large Numbers (WLLN) by one of our members!
ANNOUNCEMENT: NVIDIA Deep Learning Institute Workshop 2018 – Registrations open!
Attend the #NVDLI #deeplearning workshop hosted by NVIDIA and the Department of Mechanical Engineering at Iowa State University on November 3rd, 2018, from 8AM to 5PM. Register now!
Robustifying ML series – Schedule and Topics
Fall 2018 – Lecture series by Dr. Chinmay Hegde and Dr. Soumik Sarkar on Robustifying ML. Find the course plan here: Robustifying ML
(Note: Lectures 4 and 5 have been interchanged).
You can also find the lecture material in the Fall 2018 Talks tab.
Robustifying ML – Lecture series : Session 4 (Defenses for supervised DL models) – 09/14/2018
The fourth session in the Robustifying ML series was conducted by Dr. Sarkar at 12pm in Black 2004. The lecture notes can be found here: Defenses.
Robustifying ML – Lecture series : Session 3 (Attacks on RL models) – 09/07/2018
The third lecture in the Robustifying ML series was conducted by Dr. Sarkar in Black 2004 on the 7th of September, 2018. The slides can be found here: Slides: Attacks on RL.
Robustifying ML – Lecture series : Session 2 (White box and black box attacks) – 08/31/2018
The second session in this lecture series was conducted in Black Engineering 2004 at 12pm on August 31. Lecture notes can be found below:
Robustifying ML – Lecture series : Session 1 (Introduction) – 08/24/2018
The first lecture in this series was conducted in Coover 3043 on the 24th of August by Dr. Chinmay Hegde.
You can find the notes on the topics covered here:
Weekly Seminar – 1/19/2018 and 1/26/2018 – Power of Gradient Descent
Invited talk by Dr. Chinmay Hegde of ECpE on:
————————————————-
“The power of gradient descent”
Many of the recent advances in machine learning can be attributed to two reasons: (i) more available data, and (ii) new and efficient optimization algorithms. Curiously, the simplest primitive from numerical analysis — gradient descent — is at the forefront of these newer ML techniques, even though the functions being optimized are often extremely non-smooth and/or non-convex.
In this series of chalk talks, I will discuss some recent theoretical advances that may shed light on why this is happening and how to properly approach the design of new training techniques.
————————————————-
When?
– 12pm to 1pm, Fridays, 19th and 26th January
Where?
– 2222, Coover Hall
————————————————-
Lecture notes are available here.
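To make the theme of the talk concrete, here is a minimal sketch (ours, not taken from the lecture notes) of plain gradient descent on a simple non-convex scalar function; the function, step size, and starting point are all illustrative choices:

```python
# f(x) = x^4 - 3x^2 + x is non-convex, with two local minima.
def f(x):
    return x**4 - 3 * x**2 + x

# Its gradient: f'(x) = 4x^3 - 6x + 1.
def grad_f(x):
    return 4 * x**3 - 6 * x + 1

x = 2.0      # illustrative starting point
lr = 0.01    # fixed step size
for _ in range(200):
    x = x - lr * grad_f(x)   # the gradient descent update

print(f"x ≈ {x:.4f}, f(x) ≈ {f(x):.4f}")
```

Started from x = 2.0, the iterate settles into the local minimum near x ≈ 1.13; a negative start would land in the other basin. Which basin you reach, and why such simple dynamics work so well on far messier losses, is exactly the kind of question the talk addresses.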
Spring 18 Seminar #1
After a hiatus of about five months, we’re finally back in action this semester, with a series of exciting talks lined up! Ardhendu Tripathy, a PhD student with Dr. Aditya Ramamoorthy, has volunteered to share his experience from his recent internship at MERL. Please find the details below:
—————————————
In the first few minutes, I will describe my internship experience at MERL in Summer 2017, followed by a short talk about the work that was done. The basic subject of the internship was the privacy-preserving release of datasets. A report about it can be found at https://arxiv.org/abs/1712.
In the talk, I will describe the problem framework and show a tradeoff between privacy and utility in the case of synthetic data. This tradeoff can be closely attained by using adversarial neural networks. Following that, I will visualize the performance on a contrived privacy problem on the MNIST dataset.
Thanks and regards,
Ardhendu
Please find the presentation slides accompanying the talk here.
When?
Friday, 12th January (tomorrow), 12pm to 1pm.
Where?
2222, Coover Hall.
We’re also going to arrange for some refreshments! Join us!
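For those curious about how an adversarial network can trade privacy against utility, here is a hedged sketch in PyTorch (our illustration only, not the internship code; the synthetic data, network shapes, and tradeoff weight lam are all assumptions): a releaser network produces a representation from which a utility head should predict the useful label, while an adversary trying to recover the private attribute should fail.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 512, 10
x = torch.randn(n, d)               # synthetic raw data (assumption)
s = (x[:, 0] > 0).long()            # private attribute (assumption)
y = (x[:, 1] + x[:, 2] > 0).long()  # utility label (assumption)

releaser = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
adversary = nn.Linear(d, 2)         # tries to recover s from the release
utility = nn.Linear(d, 2)           # tries to predict y from the release

opt_rel = torch.optim.Adam(
    list(releaser.parameters()) + list(utility.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()
lam = 1.0                           # privacy-utility tradeoff knob

for step in range(200):
    # 1) Adversary update: best-effort inference of s from the release.
    z = releaser(x).detach()
    opt_adv.zero_grad()
    ce(adversary(z), s).backward()
    opt_adv.step()

    # 2) Releaser/utility update: keep y predictable, make s unpredictable.
    z = releaser(x)
    loss = ce(utility(z), y) - lam * ce(adversary(z), s)
    opt_rel.zero_grad()
    loss.backward()
    opt_rel.step()
```

The minus sign on the adversary's loss is the core of the idea: the releaser descends the utility loss while ascending the adversary's, and sweeping lam traces out the privacy-utility tradeoff the abstract mentions.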
NIPS 2017: Themes and Takeaways – via The Cognitive Vortex
A summary post on major themes and takeaways from NIPS 2017, by Gauri Jagatap: NIPS 2017: Themes and Takeaways (click on post title to open).