Keynote Speakers

Three exciting keynotes are confirmed, to be given by Pete Warden (Google Brain), Paul Whatmough (Arm and Harvard), and Dimitris Papailiopoulos (University of Wisconsin-Madison); together they cover a rich spectrum of the embedded and mobile deep learning space.

Emerging Techniques for Constrained-Resource Deep Learning
Pete Warden
Google Brain

This talk will cover a variety of software, hardware, and modeling approaches for adapting deep learning to mobile and embedded platforms with limited compute, storage, and memory budgets. These will include recent advances in quantization and compression during training, new software frameworks like Arm's CMSIS, and low-power hardware like DSPs, microcontrollers, and Lattice FPGAs.
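
For a flavor of what such a quantization workflow looks like in practice, here is a minimal sketch of post-training 8-bit quantization with the TensorFlow Lite converter (illustrative only, not the specific in-training techniques the talk will present); the saved-model path and the random calibration data are placeholders.

```python
# Minimal sketch: post-training 8-bit quantization with the TensorFlow Lite converter.
# "path/to/saved_model" and the random calibration data below are placeholders.
import numpy as np
import tensorflow as tf

def representative_data():
    # A handful of calibration samples lets the converter choose activation ranges.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```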

Pete Warden is the technical lead of the mobile and embedded TensorFlow group in Google's Brain team.


Energy-Efficient Neural Network Inference for the Embedded Masses
Paul Whatmough
Arm ML Research and Harvard University

Machine learning techniques such as deep neural networks (DNNs) are empowering sensor-rich IoT and mobile devices with the capability to interpret the complex, noisy data that describes the real world around them. However, DNNs in their various forms require a large number of arithmetic operations and have a large memory footprint. IoT and mobile form factors are customarily battery powered, so DNN implementations must be very energy efficient. Fortunately, DNN inference workloads are amenable to a number of major implementation optimizations, including aggressive parallelization, quantizing to small data types, exploiting sparse data, maximizing reuse of local data, and fault tolerance. In this talk, we will explore these optimizations in various contexts, touching on DNN architecture, hardware accelerators, system optimizations, and design and simulation tools.
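
As a toy illustration of one of the optimizations listed above, quantizing to small data types, the sketch below performs symmetric per-tensor int8 weight quantization. Real deployments typically use per-channel scales, careful rounding modes, and calibrated activation ranges, all of which are omitted here.

```python
# Minimal sketch: symmetric per-tensor int8 weight quantization.
# Illustration only; production flows use per-channel scales and calibration.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single symmetric scale."""
    scale = np.max(np.abs(w)) / 127.0            # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs quantization error:", np.max(np.abs(w - w_hat)))
```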

Paul Whatmough received the B.Eng. degree (with first class Honours) from the University of Lancaster, UK, the M.Sc. degree (with distinction) from the University of Bristol, UK, and his doctorate from University College London, UK. He has previously held positions in the Wireless Group at Philips/NXP Research Labs (UK), the Silicon R&D Group at ARM (UK), and Harvard University (USA), working in the areas of machine learning, DSP, wireless, embedded computing, variation tolerance, and supply voltage noise. He currently leads ML research at Arm Research Boston and is an Associate at Harvard University.


Overcoming the Challenges of Distributed Learning
Dimitris Papailiopoulos
University of Wisconsin-Madison

In this talk, I will highlight a few key challenges in distributed machine learning. I will first focus on communication overheads during model training and discuss how they lead to poor speedup gains when scaling out to hundreds of compute nodes. I will present theoretical insights suggesting that we can only overcome these scaling issues by either building new classes of neural networks or by designing novel training algorithms that require far less communication. We will then turn to issues of robustness and reliability, and discuss how the training process itself is susceptible to hardware failures, straggler nodes, and adversarial attacks. I will explain how simple algebraic ideas, borrowed from the theory of error-correcting codes, can be used to enable robust distributed training. I will conclude with several open problems that lie at the intersection of machine learning, large-scale optimization, and distributed systems.
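
To make the error-correcting-codes idea concrete, here is a minimal sketch in the spirit of gradient coding (an illustrative textbook construction, not necessarily the scheme presented in the talk): three workers each hold two of three data partitions and send one coded combination of their partition gradients, and the full gradient sum can be recovered from any two workers, so a single straggler or failure is tolerated.

```python
# Minimal sketch: a tiny gradient-coding scheme with 3 workers and 3 data
# partitions that tolerates any single straggler. g1, g2, g3 stand in for
# the per-partition gradients.
import numpy as np

g1, g2, g3 = (np.random.randn(4) for _ in range(3))
true_sum = g1 + g2 + g3

# Each worker sends one fixed linear combination of the two partitions it holds.
worker_msgs = {
    1: 0.5 * g1 + g2,   # worker 1 holds partitions 1 and 2
    2: g2 - g3,         # worker 2 holds partitions 2 and 3
    3: 0.5 * g1 + g3,   # worker 3 holds partitions 1 and 3
}

# For every pair of surviving workers, a linear combination of their messages
# recovers g1 + g2 + g3 exactly.
decode = {
    frozenset({1, 2}): {1: 2.0, 2: -1.0},
    frozenset({1, 3}): {1: 1.0, 3: 1.0},
    frozenset({2, 3}): {2: 1.0, 3: 2.0},
}

for survivors, coeffs in decode.items():
    recovered = sum(c * worker_msgs[w] for w, c in coeffs.items())
    assert np.allclose(recovered, true_sum)
    print(f"workers {sorted(survivors)} recover the full gradient")
```

The price of this robustness is redundancy: each partition's gradient is computed by two workers, and the general constructions trade the replication factor against the number of stragglers tolerated.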

Dimitris Papailiopoulos is an Assistant Professor of Electrical and Computer Engineering and Computer Sciences (by courtesy) at the University of Wisconsin-Madison, and a Faculty Fellow of the Grainger Institute for Engineering. Between 2014 and 2016, Dimitris was a postdoctoral researcher at UC Berkeley and a member of the AMPLab. His research interests span machine learning, information theory, and distributed systems, with a current focus on communication-avoiding training algorithms and erasure codes for robust distributed optimization. Dimitris earned his Ph.D. in ECE from UT Austin in 2014, under the supervision of Alex Dimakis. In 2015, he received the IEEE Signal Processing Society Young Author Best Paper Award. In 2018, Dimitris was a founding organizer and Program Co-Chair of SysML, a new conference that targets research at the intersection of machine learning and systems.