Weekly Seminar – 01/13/2017 + New developments

Welcome back, data science enthusiasts! We had an eventful seminar series last semester and have some new plans springing up for the new semester.

We’re kicking off our new batch of sessions with a talk by Binghui Wang from Dr. Neil Gong’s group, who will be presenting an introduction to Adversarial Machine Learning, a research field that lies at the intersection of Machine Learning and Computer Security.

Date: Friday, 13th January, 2017
Time: 3pm to 4pm
Venue: 2222, Coover Hall

You can find the slides for the talk here: Slides

We have a bunch of announcements regarding a new format of talks scheduled for this semester. We took a poll last month and decided to have 3-4 mini-series of talks, each concentrating on a single topic. We will be discussing the speaker line-up and the structure of these series at the seminar. Join us at 2222, Coover on Friday at 3pm to learn more!
Also, beat the winter chill with some piping hot coffee that awaits you!

Weekly Seminar – 12/02/2016

Praneeth from Dr. Vaswani’s group will be giving the final presentation of this semester for our data science reading group. Here are the details, as forwarded by Praneeth:
______________________________________________________

Abstract: I will motivate the problem of robust PCA through some examples, followed by a brief introduction to the existing approaches to solving this problem. I will then describe the recently proposed non-convex algorithm in some detail. I also intend to go over the outline of the proof of its performance guarantees, discussing a few key points in detail. If time permits, I will show some results of the proposed algorithm on simulated data.

Slides can be found here: RPCA
Paper available at: https://arxiv.org/pdf/1410.7660v1.pdf

______________________________________________________
Date: 2nd December
Time: 3:00pm to 4:00pm
Venue: 3138, Coover Hall (2222 is unavailable this week)
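
For readers new to the low-rank-plus-sparse model behind robust PCA, here is a minimal Python sketch of the alternating idea (a truncated SVD for the low-rank part, hard thresholding for the sparse part) on toy data. The rank, threshold level, and outlier model are illustrative assumptions, not the paper's exact algorithm or parameters.

```
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r component
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05                  # ~5% of entries are gross outliers
S_true[mask] = 10.0 * rng.choice([-1.0, 1.0], size=mask.sum())
M = L_true + S_true                               # observed matrix

S = np.zeros_like(M)
thresh = 5.0                                      # hard-threshold level, hand-tuned for this toy
for _ in range(30):
    # low-rank step: best rank-r approximation of M - S
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]
    # sparse step: keep only large entries of the residual as outliers
    R = M - L
    S = np.where(np.abs(R) > thresh, R, 0.0)

print("relative error in low-rank part:",
      np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```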

Weekly Seminar – 11/18/2016

Gauri Jagatap, from Dr. Chinmay Hegde’s group, will be presenting an overview of phase retrieval problems in signal processing this Friday. She will primarily speak on a popular phase recovery strategy called AltMinPhase, based on the paper “Phase Retrieval Using Alternating Minimization” by Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi.

She will also introduce a newer approach for phase retrieval of sparse signals, from the paper “Efficient Compressive Phase Retrieval with Constrained Sensing Vectors” by Sohail Bahmani and Justin Romberg.

Phase retrieval is essentially the problem of recovering a signal from magnitude-only measurements, i.e., measurements whose phase information has been lost. In several applications in crystallography, optics, spectroscopy, and tomography, recording the phase of the measurements is difficult or infeasible, while recording their magnitudes is significantly easier.
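
As a rough illustration of the alternating-minimization idea behind AltMinPhase, here is a minimal Python sketch for real-valued signals with Gaussian measurements: a spectral initialization followed by alternating sign (phase) estimation and least-squares updates. The dimensions and iteration count are arbitrary, and refinements from the paper (such as sample splitting) are omitted.

```
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400                      # signal length, number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))     # sensing vectors as rows
y = np.abs(A @ x_true)              # magnitude-only measurements

# Spectral initialization: top eigenvector of (1/m) * sum_i y_i^2 a_i a_i^T,
# scaled by an estimate of ||x||.
Y = (A.T * y**2) @ A / m
eigvals, eigvecs = np.linalg.eigh(Y)
x = eigvecs[:, -1] * np.sqrt(np.mean(y**2))

for _ in range(50):
    phases = np.sign(A @ x)                              # step 1: estimate signs (phases)
    x, *_ = np.linalg.lstsq(A, phases * y, rcond=None)   # step 2: least-squares update

# Recovery is only possible up to a global sign
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)) / np.linalg.norm(x_true)
print(f"relative error: {err:.2e}")
```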

Date: 11/18/2016

Time: 3:00pm – 4:00pm

Venue: 2222, Coover

Slides: Phase Retrieval (updated)

Weekly Seminar – 11/11/2016

Davood Hajinezhad from Dr. Mingyi Hong’s group will be presenting this week on “A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization”. Details of the talk are given below:

Abstract: We study a stochastic and distributed algorithm for nonconvex problems whose objective consists of a sum of $N$ nonconvex $L_i/N$-smooth functions, plus a nonsmooth regularizer. The proposed NonconvEx primal-dual SpliTTing (NESTT) algorithm splits the problem into $N$ subproblems and utilizes an augmented-Lagrangian-based primal-dual scheme to solve it in a distributed and stochastic manner. With a special non-uniform sampling, a version of NESTT achieves an $\epsilon$-stationary solution using $O((\sum_{i=1}^N \sqrt{L_i/N})^2/\epsilon)$ gradient evaluations, which can be up to $O(N)$ times better than (proximal) gradient descent methods. It also achieves a Q-linear convergence rate for nonconvex $\ell_1$-penalized quadratic problems with polyhedral constraints. Further, we reveal a fundamental connection between primal-dual based methods and a few primal-only methods such as IAG/SAG/SAGA.
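
NESTT itself is not reproduced here, but the sketch below illustrates the problem class from the abstract – a finite sum of $N$ smooth nonconvex losses plus an $\ell_1$ regularizer – solved with the plain full-batch proximal gradient baseline that the abstract compares against. The sigmoid loss, dimensions, step size, and regularization weight are illustrative assumptions.

```
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 20
A = rng.standard_normal((N, d))
y = np.sign(rng.standard_normal(N))   # +/-1 labels
lam = 1e-3                            # l1 regularization weight (illustrative)

def grad_f(x):
    # gradient of (1/N) * sum_i 1 / (1 + exp(y_i * a_i^T x)),
    # a smooth nonconvex finite sum (sigmoid loss), in a numerically stable form
    u = y * (A @ x)
    e = np.exp(-np.abs(u))
    w = e / (1.0 + e) ** 2            # equals sigma(u) * (1 - sigma(u))
    return -(A.T @ (w * y)) / N

def prox_l1(x, t):
    # proximal operator of t * ||x||_1 (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.zeros(d)
step = 1.0
for _ in range(500):
    x = prox_l1(x - step * grad_f(x), step * lam)   # full-batch proximal gradient step
```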

Date: 11th November
Venue: 2222, Coover
Time: 3:00pm to 4:00pm
Slides: NESTT

Weekly Seminar – 11/04/2016

Han Guo from Dr. Namrata Vaswani’s group will be presenting this week on the topic of Video Denoising and Enhancement via Dynamic Sparse and Low-rank Matrix Decomposition. Details of the talk are as follows:

Abstract: Video denoising refers to the problem of removing “noise” from a video sequence. Here the term “noise” is used in a broad sense to refer to any corruption, outlier, or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising based on the idea that many noisy or corrupted videos can be split into three parts – the “low-rank layer”, the “sparse layer”, and everything else (which is small and bounded). We show, using extensive experiments, that our denoising approach ReLD (ReProCS-based Layering Denoising) outperforms state-of-the-art denoising algorithms.

Date: 4th November
Venue: 2222, Coover
Time: 3:00pm to 4:00pm
You can find the slides here.

Weekly Seminar – 10/28/2016

Pan Zhong from Dr. Zhengdao Wang’s group will be presenting this week. She will be speaking on an interesting topic in statistical learning theory: the VC dimension. Details of the talk are listed below:

Venue: Coover, 2222
Time: 3:00pm to 4:00pm
Date: 10/28/2016

References:
i. Chapter 4 of Statistical Learning Theory by Vladimir Naumovich Vapnik.
ii. Carlos Guestrin’s slides from the Fall 2007 course on Machine Learning at CMU.

You can also find the slides for the talk here.

Attacking discrimination with smarter machine learning

Some exciting research going on at the Big Picture research group at Google:


As machine learning is increasingly used to make important decisions across core social domains, the work of ensuring that these decisions aren’t discriminatory becomes crucial.

Here we discuss “threshold classifiers,” a part of some machine learning systems that is critical to issues of discrimination. A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score…

Read more about it here.

Source: research.google.com
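
As a toy illustration of the excerpt above, the sketch below builds a threshold classifier on simulated credit scores for two groups and then picks group-specific thresholds that equalize the true positive rate (an “equal opportunity” style fix). The score distributions, labels, and target rate are made-up assumptions for illustration, not Google’s data or model.

```
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, mean_pos, mean_neg):
    """Simulate credit scores for one group: half repay (label 1), half default (label 0)."""
    labels = np.r_[np.ones(n // 2), np.zeros(n - n // 2)]
    scores = np.r_[rng.normal(mean_pos, 10, n // 2), rng.normal(mean_neg, 10, n - n // 2)]
    return scores, labels

def tpr_at(scores, labels, threshold):
    """True positive rate: fraction of would-be repayers who are granted a loan."""
    granted = scores >= threshold
    return granted[labels == 1].mean()

# Two groups whose score distributions differ; a single shared threshold then grants
# loans to qualified applicants at different rates across the groups.
scores_a, labels_a = simulate_group(1000, 70, 45)
scores_b, labels_b = simulate_group(1000, 60, 35)

shared = 55.0
print("shared threshold TPRs:",
      tpr_at(scores_a, labels_a, shared), tpr_at(scores_b, labels_b, shared))

# One possible fix: choose per-group thresholds so that qualified applicants in
# each group are approved at the same (target) rate.
target = 0.9
thr_a = np.quantile(scores_a[labels_a == 1], 1 - target)
thr_b = np.quantile(scores_b[labels_b == 1], 1 - target)
print("per-group thresholds:", thr_a, thr_b,
      "TPRs:", tpr_at(scores_a, labels_a, thr_a), tpr_at(scores_b, labels_b, thr_b))
```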

ISU Data Science Reading Group

We’re a group of graduate students from Iowa State University with a shared interest in solving some cool data science problems.

We meet weekly on Fridays, from 12:00 pm to 1:00 pm, in Black 2004 (this week onward; previously Coover 3043) to discuss varied problems at the intersection of our research areas (check the Research tab for more info). Occasionally, we enjoy active brainstorming sessions over cups of medium roast coffee.

You can find all material related to our seminars, including slides and references, in the List of Talks tab.