Intel AI Research at ICML

Intel is a sponsor of the 36th International Conference on Machine Learning (ICML). At ICML, you’ll discover cutting-edge research on all aspects of machine learning used in AI, statistics and data science, as well as applications like machine vision, computational biology, speech recognition, and robotics.


Accepted Paper Presentations - Day 3

Tuesday June 11, 2019

Title Time Location Authors Abstract
Collaborative Evolutionary Reinforcement Learning (CERL) 2:35PM - 2:40PM Deep RL Session – Hall B

Shauharda Khadka – Intel, Somdeb Majumdar – Intel, Zach Dwiel – Terran Robotics, Evren Tumer – Intel, Santiago Miret – Intel, Yinyin Liu – Intel, Kagan Tumer – Oregon State University, Tarek Nassar – Intel

(Oral presentation) CERL is a sample-efficient reinforcement learning framework that combines gradient-based and gradient-free learning. CERL outperforms either approach alone in sample efficiency and is less sensitive to hyperparameters.

Paper ›
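As a rough illustration of the idea in the CERL abstract — an evolutionary population of policies exchanging information with a gradient-based learner — here is a minimal toy sketch. Everything here (the quadratic "fitness," the parameter-vector policies, the injection schedule) is an invented stand-in; the actual framework trains neural-network policies in RL environments with a shared replay buffer.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Toy stand-in for an episode return; CERL evaluates real
    # policies by rolling them out in the environment.
    return -np.sum((theta - 1.0) ** 2)

def grad_fitness(theta):
    return -2.0 * (theta - 1.0)

# Population of "policies" (here: plain parameter vectors).
pop = [rng.normal(size=4) for _ in range(8)]
learner = rng.normal(size=4)          # gradient-based learner

for gen in range(50):
    # 1. Gradient-based update (stands in for an off-policy RL step).
    learner = learner + 0.1 * grad_fitness(learner)

    # 2. Evolutionary step: evaluate, select elites, mutate.
    scores = [fitness(p) for p in pop]
    elites = [pop[i] for i in np.argsort(scores)[-4:]]
    pop = elites + [e + 0.1 * rng.normal(size=4) for e in elites]

    # 3. Periodically inject the learner into the population —
    #    the information flow that combines the two methods.
    if gen % 10 == 0:
        pop[0] = learner.copy()

best = max(fitness(p) for p in pop)
```

The key design point the paper exploits is that the injected gradient-trained policy competes on equal footing with evolved policies, so whichever learning mode is making progress dominates the population.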

Non-Parametric Priors for Generative Adversarial Networks 3:05PM - 3:10PM GAN Session – Hall A

Martin Braun – Intel, Ravi Garg – Intel, Rajhans Singh, Pavan Turaga, Suren Jayasuriya – Arizona State University

(Oral presentation) This paper proposes a novel prior, derived using basic theorems from probability theory and off-the-shelf optimizers, that improves the fidelity of GAN image generation and enables interpolation along any Euclidean straight line without additional training or architecture modifications.

Paper ›

Accepted Paper Presentations - Day 4

Wednesday June 12, 2019

Title Time Location Authors Abstract
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization 12:10PM - 12:15PM Applications Session – Room 201

Hesham Mostafa – Intel, Xin Wang – Intel

(Oral presentation) We describe a heuristic for modifying the structure of sparse deep convolutional networks during training that allows us to train sparse networks directly to reach accuracies on par with those obtained by compressing/pruning large, dense models.

Paper ›
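To illustrate the prune-and-regrow idea behind dynamic sparse reparameterization, here is a simplified NumPy sketch: magnitude-prune some surviving weights, then regrow the same number of zero-initialized connections at random positions so the parameter budget stays fixed. The paper's actual heuristic uses an adaptive pruning threshold and reallocates connections across layers, so treat this single-matrix version as an assumption-laden toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense weight matrix with a fixed sparsity budget (~10% active).
w = rng.normal(size=(16, 16))
mask = rng.random(w.shape) < 0.1
budget = int(mask.sum())

def reallocate(w, mask, prune_frac=0.2):
    """One dynamic-sparse step: prune the smallest surviving weights,
    then regrow the same number of connections at random positions."""
    active = np.flatnonzero(mask)
    k = max(1, int(prune_frac * active.size))
    # Prune the k active weights with the smallest magnitude.
    mags = np.abs(w.ravel()[active])
    mask.ravel()[active[np.argsort(mags)[:k]]] = False
    # Regrow k connections uniformly among the inactive positions,
    # initialized to zero.
    inactive = np.flatnonzero(~mask.ravel())
    grow = rng.choice(inactive, size=k, replace=False)
    mask.ravel()[grow] = True
    w.ravel()[grow] = 0.0
    return w, mask

w, mask = reallocate(w, mask)
assert mask.sum() == budget            # parameter count is preserved
```

In training, this reallocation step would run every few hundred iterations between ordinary SGD updates on the masked weights.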

Accepted Paper Presentations - Day 6

Friday June 14, 2019

Title Time Location Authors Abstract
Learning a Hierarchy of Neural Connections for Modeling Uncertainty 8:30AM - 6:00PM Uncertainty & Robustness in Deep Learning Workshop – Hall B

Raanan Yehezkel – Intel, Yaniv Gurwicz – Intel, Shami Nisimov – Intel, Gal Novik – Intel

Quantifying uncertainty in deep neural networks is an open problem. In this paper we propose a new deep architecture and demonstrate that it enables estimating various types of uncertainty.

Accepted Paper Presentations - Day 7

Saturday June 15, 2019

Title Time Location Authors Abstract
Goal-conditioned Imitation Learning 11:00AM - 12:00PM Adaptive & Multitask Workshop

Yiming Ding – UC Berkeley, Carlos Florensa – UC Berkeley, Mariano Phielipp – Intel AI Lab, Pieter Abbeel – UC Berkeley and Covariant

Solving challenging robotics-style environments in reinforcement learning using few demonstrations and self-supervision.

Paper ›

Privacy Preserving Adjacency Spectral Embedding on Stochastic Blockmodels 3:30PM - 4:30PM Learning and Reasoning with Graph-Structured Representations Workshop

Li Chen – Intel

For graphs generated from stochastic blockmodels, adjacency spectral embedding is asymptotically consistent. The methodology presented in this paper can estimate the latent positions by adjacency spectral embedding and achieve comparable accuracy at desired privacy parameters in simulated and real world networks.

Paper ›

Sparse Representation Classification via Screening for Graphs 3:30PM - 4:30PM Learning and Reasoning with Graph-Structured Representations Workshop

Cencheng Shen, Li Chen – Intel, Carey Priebe, Yuexiao Dong

In this paper we propose a new implementation of the sparse representation classification (SRC) via screening, establish its equivalence to the original SRC under regularity conditions, and prove its classification consistency under a latent subspace model.

Paper ›

Expo Day - Day 1

Sunday June 9, 2019

Title Time Location Abstract
Reaching Intent Estimation via Approximate Bayesian Computation 2:00PM - 6:30PM Room 101 Demo: This interactive demo shows a system that provides real-time estimation of user intent. When the user places an object on the table, the system estimates the intended placement location and represents it as a probability density function. The system is composed of three elements: an object tracker, a model-based physically plausible trajectory generator, and a probability function. The user is captured through an Intel® RealSense™ camera, and the intent is obtained through approximate Bayesian computation in an analysis-by-synthesis approach.
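As background on the inference step in the demo above, here is a minimal sketch of rejection-style approximate Bayesian computation: sample candidate intents from a prior, simulate each through a forward model, and keep those whose simulated outcome lands near the observation. The forward model, tolerance, and all numbers are hypothetical stand-ins for the demo's tracker and trajectory generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_endpoint(target, noise=0.05):
    # Hypothetical forward model: where the object ends up if the
    # user intends to place it at `target` (stand-in for the demo's
    # physically plausible trajectory generator).
    return target + rng.normal(scale=noise, size=2)

observed = np.array([0.4, 0.6])        # observed placement so far

# Rejection ABC: sample intents from a uniform prior over the table,
# accept those whose simulated outcome is close to the observation.
candidates = rng.uniform(0, 1, size=(5000, 2))
accepted = [c for c in candidates
            if np.linalg.norm(simulate_endpoint(c) - observed) < 0.1]
posterior = np.array(accepted)
mean_intent = posterior.mean(axis=0)
```

The accepted samples approximate the posterior density over intended placement locations, which is exactly the probability density function the demo visualizes.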
Optimize Deep Learning on Apache Spark with Intel® DL Boost Technology and Intel® Parallel Studio 3:00PM - 4:00PM Grand Ballroom

Talk: Thanks to Intel DL Boost technology with new Vector Neural Network Instructions (VNNI), deep learning inference performance in BigDL is dramatically improved on 2nd gen Intel® Xeon® Scalable processors. We will showcase the VGG-16 FP32/INT8 throughput improvement and show how to use Intel Parallel Studio to profile and optimize DL workloads.
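For context on why INT8 inference helps, here is a simplified sketch of symmetric per-tensor INT8 quantization, the basic scheme VNNI-style instructions accelerate. This is a generic illustration, not BigDL's or Parallel Studio's actual implementation, which handle calibration, per-channel scales, and memory layout.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization (simplified sketch)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Quantize a toy FP32 weight tensor and check the round-trip error,
# which is bounded by half the quantization step.
w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
```

The speedup comes from performing the matrix multiplies on the 8-bit `q` tensors, with the float scales applied once to the accumulated results.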
NLP Architect by Intel® AI Lab 5:30PM - 6:30PM Grand Ballroom Session: NLP Architect is an open-source Python library for exploring the state-of-the-art deep learning topologies and techniques for natural language processing and natural language understanding. In this session, we will discuss NLP Architect features and demonstrate how easily non-ML/NLP developers can build advanced NLP applications such as unsupervised Aspect-Based Sentiment Analysis (ABSA), Set-Term Expansion, and Topic & Trend extraction.

More Ways to Engage

Follow us @IntelAI and @IntelAIResearch for more updates from @ICMLconf and the Intel AI research team!