Low-latency Speculative Inference on Distributed Multi-modal Data Streams (MobiSys 2021)
Abstract: While multi-modal deep learning is useful in distributed sensing tasks such as human tracking, activity recognition, and audio and video analysis, deploying state-of-the-art multi-modal models in a wirelessly networked sensor system poses unique challenges. The data sizes of different modalities can be highly asymmetric (e.g., video vs. audio), and under wireless dynamics these differences can introduce significant delays between streams. A lagging stream can therefore stall a multi-modal inference system in the cloud, leading either to increased latency (when inference blocks on the slow stream) or to degraded accuracy (when inference proceeds without it). In this paper, we introduce speculative inference on multi-modal data streams to adapt to these asymmetries across modalities. Rather than blocking inference until all sensor streams have arrived and been temporally aligned, we impute any missing, corrupt, or partially available sensor data and generate a speculative inference from the learned models and the imputed data. A rollback module then examines the class output of the speculative inference and determines whether that class is sufficiently robust to incomplete data to accept the result; if not, it rolls back the inference and updates the model's output. We implement the system in three multi-modal application scenarios using public datasets. The experimental results show that our system achieves a 7–128× speedup in latency while matching the accuracy of six state-of-the-art methods.
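To make the impute-speculate-rollback pipeline concrete, below is a minimal, self-contained Python sketch of the flow as described in the abstract. It is not the paper's implementation: the names (impute_mean, DummyModel, ROBUST_CLASSES), the mean-fill imputation, and the fixed set of rollback-exempt classes are all illustrative assumptions standing in for the paper's learned imputation and rollback components.

```python
"""Illustrative sketch of speculative multi-modal inference with rollback.
All components here are hypothetical stand-ins, not the paper's method."""
from typing import Dict, Optional
import numpy as np

# Assumption: classes determined (e.g., empirically) to be robust to
# imputed inputs, so their speculative results are accepted as-is.
ROBUST_CLASSES = {0, 2}

def impute_mean(x: Optional[np.ndarray]) -> np.ndarray:
    """Stand-in imputation: zero-fill a fully missing stream,
    per-feature mean-fill a partially available one."""
    if x is None:
        return np.zeros(4)
    filled = x.copy()
    filled[np.isnan(filled)] = np.nanmean(filled)
    return filled

class DummyModel:
    """Placeholder multi-modal classifier: concatenates the modality
    features and applies a fixed random linear layer."""
    def __init__(self, n_classes: int = 3, dim: int = 8, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(dim, n_classes))

    def __call__(self, streams: Dict[str, np.ndarray]) -> int:
        x = np.concatenate([streams[m] for m in sorted(streams)])
        return int(np.argmax(x @ self.w))

def speculative_infer(streams: Dict[str, Optional[np.ndarray]],
                      model: DummyModel):
    # 1. Impute missing/corrupt/partial modalities instead of blocking
    #    on the slow stream.
    filled = {m: impute_mean(x) for m, x in streams.items()}
    # 2. Run the model on imputed data to get a speculative result.
    label = model(filled)
    # 3. Rollback check: accept the result only if the predicted class
    #    tolerates incomplete data; otherwise flag it so the output can
    #    be rolled back and updated once all streams have arrived.
    needs_rollback = label not in ROBUST_CLASSES
    return label, needs_rollback

# Usage: the audio stream is delayed, so inference proceeds speculatively.
model = DummyModel()
streams = {"video": np.ones(4), "audio": None}
label, needs_rollback = speculative_infer(streams, model)
print(f"speculative label={label}, rollback pending={needs_rollback}")
```

In this sketch, a flagged result would later be re-evaluated on the complete, temporally aligned streams and overwritten if the final prediction differs, mirroring the rollback behavior the abstract describes.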