Program

Monday, Oct 14, 2024

8:00 - 17:00    Registration
    Workshop-ASI
    Workshop-AIoT
    Workshop-SCRL

Tuesday, Oct 15, 2024

8:00 - 17:00    Registration
8:30 - 9:00    Opening Remarks

9:00 - 10:00    Keynote

Abstract: Artificial Intelligence (AI) is revolutionizing Cloud Computing and Networking. This keynote will explore Acceleration as a Service (XaaS), a paradigm providing GPU support for AI and High-Performance Computing (HPC) workloads in cloud environments. XaaS empowers communities to manage and distribute containerized high-performance software efficiently across multiple clouds. We will highlight the critical role of network performance, introducing a novel method to measure the latency sensitivity of applications. This sets the stage for an in-depth analysis of AI workloads in networked systems, revealing the immense demands these applications place on current infrastructure. Today’s Ethernet-based datacenter and cloud networks fall short in supporting large-scale AI training, inference, and HPC workloads. To address this, we will present Ultra Ethernet, an emerging industry standard poised to elevate Ethernet as the ideal interconnect for AI and HPC cloud environments.

10:00 - 10:30    Coffee Break
10:30 - 12:00    Session 1: IoT Systems

Session Chair: Jun Zhao (Nanyang Technological University)

BeamCount: Indoor Crowd Counting Using Wi-Fi Beamforming Feedback Information
Paper

Siyu Chen, Hongbo Jiang (Hunan University); Jie Xiong (University of Massachusetts Amherst); Hu Jingyang (Hunan University); Penghao Wang, Chao Liu (Ocean University of China); Zhu Xiao (Hunan University); Bo Li (Hong Kong University of Science and Technology)

Abstract: Real-time indoor crowd counting plays an important role in many applications such as crowd control, resource allocation and advertisement. Current research predominantly relies on camera-based methods. However, computer vision-based solutions raise severe privacy and ethical concerns. In this paper, we propose a privacy-preserving counting solution called BeamCount based on Wi-Fi sensing. Instead of using conventional Wi-Fi Channel State Information (CSI) readings, we utilize Wi-Fi Beamforming Feedback Information (BFI) for crowd counting estimation. Compared to CSI, which can only be extracted from a few commodity Wi-Fi cards (e.g., Intel 5300), BFI readings can be obtained from a wide range of commodity Wi-Fi devices. We establish a mapping relationship between BFI and headcount and extract headcounts from BFI inputs through a carefully designed adversarial network. Owing to the adversarial network's cross-domain capability, the proposed counting system can achieve high accuracy across different environments, demonstrating its generalization capability. To mitigate the effect of BFI compression on sensing performance, we adopt a novel time series prediction model. Extensive real-world experiments validate the effectiveness of BeamCount in various environments, achieving an average counting accuracy of 93.6%.

Abstract: Human sensing has gained increasing attention in various applications. Among the available technologies, visual images offer high accuracy, while sensing on the RF spectrum preserves privacy, creating a conflict between imaging resolution and privacy preservation. In this paper, we explore thermal array sensors as an emerging modality that strikes an excellent resolution-privacy balance for ubiquitous sensing. To this end, we present TADAR, the first multi-user Thermal Array-based Detection and Ranging system that estimates the inherently missing range information, extending thermal array outputs from 2D thermal pixels to 3D depths and empowering them as a promising modality for ubiquitous privacy-preserving human sensing. We prototype TADAR using a single commodity thermal array sensor and conduct extensive experiments in different indoor environments. Our results show that TADAR achieves a mean F1 score of 88.8% for multi-user detection and a mean accuracy of 32.0 cm for multi-user ranging, which further improves to 20.1 cm for targets located within 3 m. We conduct two case studies on fall detection and occupancy estimation to showcase the potential applications of TADAR. We hope TADAR will inspire the vast community to explore new directions of thermal array sensing, beyond wireless and acoustic sensing. TADAR is open-sourced on GitHub: https://github.com/aiot-lab/TADAR.

Towards Ubiquitous IoT through Long Range Wireless Energy Harvesting
Paper

Mohamed Ibrahim (Hewlett Packard Labs/Carnegie Mellon University); Atul Bansal, Kuang Yuan, Junbo Zhang, Swarun Kumar (Carnegie Mellon University)

Abstract: Extending the range of RF energy harvesting can revolutionize battery-free/low-power sensing and networking. This paper explores the design space for RF infrastructure to charge battery-free devices (e.g., RFID) or devices with coin-cell batteries (e.g., water and security sensors) over a much longer range than the state of the art. Rather than rely completely on ambient RF (e.g., TV towers) or dedicated infrastructure (e.g., RFID readers), we explore a middle path - combine RF energy from (nearly) all available major wireless frequency bands and then supplement this with low-cost, specially designed RF charging infrastructure to fill in any gaps. We propose SCALED, an architecture that extends the energy-harvesting range of ultra-low-power platforms. We explore two design questions: (1) the design of the energy harvester, including which ambient frequency bands to combine, as well as overall modalities to combine energy rather than focus on one band; (2) the design of radio infrastructure to fill in the coverage gaps of ambient energy, with minimal interference to communication sources. We implement and study SCALED through a multi-pronged evaluation including building-scale proof-of-concept deployments.

BleHe: Indoor Positioning of a Single Bluetooth Station Based on Height Estimation
Paper

Hanying Zou, Xiaolong Zheng, Liang Liu, Huadong Ma (Beijing University of Posts and Telecommunications)

Abstract: The Bluetooth 5.1 specification introduces the Angle of Arrival feature, which significantly enhances its applications in indoor positioning. Given the height of the target, a single BLE (Bluetooth Low Energy) base station can locate the target, thereby reducing deployment costs. However, existing methods often assume that the target's height is known in advance and remains constant, which is not always true in practice. The height can vary significantly due to user posture changes, such as when picking up a phone from a pocket, leading to positioning errors. In this paper, we propose BleHe, a novel indoor positioning system that incorporates height correction to enhance positioning accuracy in scenarios with dynamic height changes. BleHe detects height changes by utilizing the on-device IMU sensor and subsequently notifies the base station of any detected height change events. To avoid altering the commodity Bluetooth protocols, rather than directly modifying application data, we create a side channel to deliver the height change information from the device to the base station by adjusting the BLE packet transmission frequency. This method allows the base station to infer the start and end times of height changes and subsequently refine the target's trajectory using a height-aware particle-filtering-based positioning correction method that we propose. Experimental results demonstrate that BleHe achieves an average positioning error of 34.4 cm, even in scenarios involving posture changes during walking.

12:00 - 13:30    Lunch
13:30 - 15:00     Session 2: Multi-armed Bandits

Session Chair: Jian Li (Stony Brook University)

Abstract: This paper studies multi-stage systems with end-to-end bandit feedback. In such systems, each job needs to go through multiple stages, each managed by a different agent, before generating an outcome. Each agent can only control its own action and learn the final outcome of the job. It has neither knowledge nor control on actions taken by agents in the next stage. The goal of this paper is to develop distributed online learning algorithms that achieve sublinear regret in adversarial environments. The setting of this paper significantly expands the traditional multi-armed bandit problem, which considers only one agent and one stage. In addition to the exploration-exploitation dilemma in the traditional multi-armed bandit problem, we show that the consideration of multiple stages introduces a third component, education, where an agent needs to choose its actions to facilitate the learning of agents in the next stage. To solve this newly introduced exploration-exploitation-education trilemma, we propose a simple distributed online learning algorithm, ϵ-EXP3. We theoretically prove that the ϵ-EXP3 algorithm is a no-regret policy that achieves sublinear regret. Simulation results show that the ϵ-EXP3 algorithm significantly outperforms existing no-regret online learning algorithms for the traditional multi-armed bandit problem.
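
For concreteness, below is a minimal single-agent EXP3 learner with an ε-uniform exploration mixture, a standard construction that illustrates the forced-exploration ingredient of ϵ-EXP3. It is a sketch only: the paper's algorithm is distributed across multiple stages and couples agents through end-to-end feedback, none of which is reproduced here, and all parameter values are illustrative.

```python
import math
import random

class EpsilonEXP3:
    """Single-agent EXP3 with an epsilon-uniform exploration mixture.

    Illustrative sketch only; the paper's epsilon-EXP3 runs one such
    learner per stage and couples them through end-to-end feedback.
    """

    def __init__(self, n_actions: int, eta: float, epsilon: float):
        self.n = n_actions
        self.eta = eta          # learning rate
        self.eps = epsilon      # forced-exploration probability
        self.log_w = [0.0] * n_actions  # log-weights for numerical stability

    def _probs(self):
        m = max(self.log_w)
        w = [math.exp(lw - m) for lw in self.log_w]
        total = sum(w)
        # Mix the exponential-weights distribution with uniform exploration.
        return [(1 - self.eps) * wi / total + self.eps / self.n for wi in w]

    def act(self):
        p = self._probs()
        return random.choices(range(self.n), weights=p)[0], p

    def update(self, action: int, reward: float, p_action: float):
        # Importance-weighted reward estimate keeps the update unbiased.
        self.log_w[action] += self.eta * reward / p_action

# Toy usage: three Bernoulli arms.
learner = EpsilonEXP3(n_actions=3, eta=0.05, epsilon=0.1)
means = [0.2, 0.5, 0.8]
for _ in range(5000):
    a, p = learner.act()
    r = 1.0 if random.random() < means[a] else 0.0
    learner.update(a, r, p[a])
print(learner._probs())  # probability mass should concentrate on arm 2
```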

On the Analysis of Two-Stage Stochastic Bandit
Paper

Yumou Liu (The Chinese University of Hong Kong, Shenzhen); Haoming Li, Zhenzhe Zheng, Fan Wu, Guihai Chen (Shanghai Jiao Tong University)

Abstract: Two-stage bandit-based algorithms have found widespread application in modern online platforms, offering a balance between cost and accuracy. The initial stage involves coarse filtering of a small candidate set of promising items from a large corpus, while the subsequent stage refines the selection and presents a single item to the user. In this work, we undertake, to the best of our knowledge, the first theoretical analysis of the two-stage stochastic multi-armed bandit problem. Specifically, we model the two-stage bandit problem as a two-stage online optimization problem. We demonstrate that while the optimization objective of the first stage may seem intuitive, it is, in fact, non-trivial. We devise a proxy optimization objective, emphasize the importance of a carefully designed exploration strategy, and establish a theoretical analysis for applying Upper Confidence Bound (UCB)-based algorithms in the first stage. Furthermore, we provide a regret analysis of the proposed two-stage bandit algorithm, demonstrating a gap-dependent upper bound of [EQUATION], where [EQUATION] is the largest reward gap, and a gap-independent lower bound of [EQUATION], where n represents the horizon.
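
As a rough illustration of the two-stage pipeline, the sketch below shortlists candidates by textbook UCB1 scores and presents the shortlisted item with the best empirical mean. The paper's proxy objective and tailored first-stage exploration strategy are not shown; all names and parameters are hypothetical.

```python
import math
import random

def ucb_score(mean, pulls, t, c=2.0):
    # Standard UCB1 index; unpulled arms get infinite priority.
    if pulls == 0:
        return float("inf")
    return mean + math.sqrt(c * math.log(t) / pulls)

def two_stage_round(means_hat, pulls, t, m):
    """Stage 1: shortlist the m arms with the highest UCB from the corpus.
    Stage 2: present the shortlisted arm with the best empirical mean
    (playing any never-pulled shortlisted arm once for initialization).
    """
    corpus = range(len(means_hat))
    shortlist = sorted(corpus,
                       key=lambda i: ucb_score(means_hat[i], pulls[i], t),
                       reverse=True)[:m]
    return max(shortlist,
               key=lambda i: means_hat[i] if pulls[i] else float("inf"))

# Toy run on Bernoulli arms.
true_means = [0.3, 0.45, 0.5, 0.7, 0.65]
means_hat, pulls = [0.0] * 5, [0] * 5
for t in range(1, 3001):
    a = two_stage_round(means_hat, pulls, t, m=2)
    r = 1.0 if random.random() < true_means[a] else 0.0
    pulls[a] += 1
    means_hat[a] += (r - means_hat[a]) / pulls[a]  # running mean
print(max(range(5), key=lambda i: pulls[i]))  # most-pulled arm (expect 3)
```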

Risk-Aware Multi-Agent Multi-Armed Bandits
Paper

Qi Shao (The Chinese University of Hong Kong); Jiancheng Ye (The University of Hong Kong); John C.S. Lui (The Chinese University of Hong Kong)

Abstract: Multi-armed bandits (MAB) is an online learning and decision-making model under uncertainty. Instead of maximizing the expected utility (or reward) as in the classical MAB setting, the variance of the utility should be considered when making risk-aware decisions. In this paper, we propose a risk-aware multi-agent MAB (MAMAB) model, which considers both the "independent" and "correlated" risk when multiple agents make arm-pulling decisions. Specifically, the system includes a platform that owns a number of tasks (or arms) awaiting a group of agents to accomplish. We show how to calculate the arm-pulling strategy of agents with potentially different eligible arm sets under a Nash equilibrium point. From the perspective of the platform, each arm has its maximal capacity to accommodate arm-pulling agents. We design the platform's optimal payment algorithms for its risk-aware revenue maximization (equivalently, a regret minimization) under both independent and correlated risks. We prove that our algorithms achieve sublinear regret under independent risks, whether or not the platform can differentiate the utility on each arm. We also prove that our algorithm achieves sublinear regret under correlated risks. We also carry out experiments to quantify the merits of our algorithms for various networking applications, such as crowdsourcing and edge computing.
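
One common way to make "risk-aware" concrete is a mean-variance index of the form μ̂ − ρσ̂² plus an exploration bonus. The sketch below uses that generic single-agent criterion; it is not the paper's multi-agent mechanism, and the risk-aversion weight ρ and bonus constant are illustrative.

```python
import math
import random
import statistics

def mean_variance_index(rewards, t, rho=1.0, c=2.0):
    """Empirical mean-variance utility with a UCB-style exploration bonus.

    rho trades off expected reward against variance (risk aversion);
    this is a generic risk-aware index, not the paper's mechanism.
    """
    n = len(rewards)
    if n < 2:
        return float("inf")  # force at least two samples per arm
    mu = statistics.fmean(rewards)
    var = statistics.variance(rewards)
    return mu - rho * var + math.sqrt(c * math.log(t) / n)

history = {a: [] for a in range(3)}
arm_params = [(0.5, 0.01), (0.6, 0.2), (0.55, 0.05)]  # (mean, variance)
for t in range(1, 2001):
    a = max(history, key=lambda i: mean_variance_index(history[i], t))
    mu, var = arm_params[a]
    history[a].append(random.gauss(mu, math.sqrt(var)))
# Arm 2 has the best mean-variance trade-off and should dominate pulls.
print({a: len(h) for a, h in history.items()})
```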

Abstract: Scheduling in multi-channel wireless communication systems presents formidable challenges in effectively allocating resources. To address these challenges, we investigate a multi-resource restless matching bandit (MR-RMB) model for heterogeneous resource systems, with the objective of maximizing long-term discounted total rewards while respecting resource constraints; the model also generalizes to applications beyond multi-channel wireless. We discuss the Max-Weight Index Matching algorithm, which optimizes resource allocation based on learned partial indexes, and derive the policy gradient theorem for index learning. Our main contribution is the introduction of a new Deep Index Policy (DIP), an online learning algorithm tailored for MR-RMB. DIP learns the partial index by leveraging the policy gradient theorem for restless arms with convoluted and unknown transition kernels of heterogeneous resources. We demonstrate the utility of DIP by evaluating its performance for three different MR-RMB problems. Our simulation results show that DIP indeed learns the partial indexes efficiently.

15:00 - 15:30    Coffee Break
15:30 - 17:00    Session 3: Edge Computing Systems

Session Chair: Ting He (Penn State University)

Resource Allocation for Stable LLM Training in Mobile Edge Computing
Paper

Chang Liu, Jun Zhao (Nanyang Technological University)

Abstract: As mobile devices increasingly become focal points for advanced applications, edge computing presents a viable solution to their inherent computational limitations, particularly in deploying large language models (LLMs). However, despite the advancements in edge computing, significant challenges remain in efficiently training and deploying LLMs due to the computational demands and data privacy concerns associated with these models. This paper explores a collaborative training framework that integrates mobile users with edge servers to optimize resource allocation, thereby enhancing both performance and efficiency. Our approach leverages parameter-efficient fine-tuning (PEFT) methods, allowing mobile users to adjust the initial layers of the LLM while edge servers handle the more demanding latter layers. Specifically, we formulate a multi-objective optimization problem to minimize the total energy consumption and delay during training. We also address the common issue of instability in model performance by incorporating stability enhancements into our objective function. Through a novel fractional programming technique, we achieve a stationary point for the formulated problem. Simulations demonstrate that our method reduces the energy consumption as well as the latency, and increases the reliability of LLMs across various mobile settings.

Efficient Multi-Dimensional Compression for Network-Edge Classification
Paper

Chengzhang Li, Peizhong Ju, Atilla Eryilmaz, Ness Shroff (The Ohio State University)

Abstract: The widespread adoption of low-cost resource-constrained edge devices and high-performance expensive servers necessitates shifting the complexity burden from edge devices to servers. However, in many applications such as image classification, it is often impractical and communication-expensive to transmit full information without any form of compression. To address this issue, this paper introduces a neural network (NN)-based compression technique tailored for resource-constrained edge devices for classification at the network edge. The core idea involves simultaneously training a shallow neural network to be implemented by the devices and a deep neural network to be implemented by the server. To adapt to time-varying channel conditions, the compression algorithm at the device side must be able to handle multiple output dimensions. To address this issue, we develop two multi-dimensional compression strategies: the multiple codebook approach, using separate NNs for various dimensions, and the single codebook approach, utilizing one NN for all dimensions. The single codebook approach substantially reduces the storage demands on the device, offering a viable solution for low-cost edge devices. Our analysis offers a theoretical performance guarantee, highlighting that the accuracy of the single codebook approach is comparable to that of the multiple codebook strategy. Through empirical evaluations on real-world datasets, we demonstrate that the single codebook approach achieves accuracy nearly equivalent to the multiple codebook alternative.

Structured Reinforcement Learning for Media Streaming at the Wireless Edge
Paper

Archana Bura (University of California, San Diego); Sarat Chandra Bobbili (Texas A&M University); Shreyas Rameshkumar (Bloomberg LP); Desik Rengarajan (Hewlett Packard Labs); Dileep Kalathil, Srinivas Shakkottai (Texas A&M University)

Abstract: Media streaming is the dominant application over wireless edge (access) networks. The increasing softwarization of such networks has led to efforts at intelligent control, wherein application-specific actions may be dynamically taken to enhance the user experience. The goal of this work is to develop and demonstrate learning-based policies for optimal decision making to determine which clients to dynamically prioritize in a video streaming setting. We formulate the policy design question as a constrained Markov decision problem (CMDP), and by using a Lagrangian relaxation we decompose it into single-client problems. Further, we show that the optimal policy takes a threshold form in the video buffer length. We then derive a natural policy gradient (NPG) based constrained reinforcement learning (CRL) algorithm using the structure of our problem, and show that it converges to the globally optimal policy. We then develop a simulation environment for training, and a real-world intelligent controller attached to a WiFi access point for evaluation. We demonstrate using YouTube media streaming experiments that our policy can increase the user quality of experience by over 30%. Furthermore, we show that the structured learning is fast, and can be easily deployed, taking only about 15 μs to execute.
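
The threshold structure admits a simple operational reading: clients whose playback buffer falls below a threshold get priority. A minimal sketch, assuming a fixed hand-picked threshold rather than the learned, provably optimal one from the paper:

```python
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    buffer_s: float   # seconds of video currently buffered

def prioritize(clients, threshold_s=8.0):
    """Threshold-policy sketch: clients whose buffer is below the threshold
    get priority, most-starved first; the rest keep best-effort service.

    The paper learns state-dependent thresholds via constrained NPG; the
    fixed 8-second value here is purely illustrative.
    """
    urgent = sorted((c for c in clients if c.buffer_s < threshold_s),
                    key=lambda c: c.buffer_s)
    rest = [c for c in clients if c.buffer_s >= threshold_s]
    return urgent + rest

clients = [Client("A", 12.0), Client("B", 3.5), Client("C", 7.0)]
print([c.name for c in prioritize(clients)])  # ['B', 'C', 'A']
```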

Abstract: With the rapid advancement of large models and mobile edge computing, transfer learning, particularly through fine-tuning, has become crucial for adapting models to downstream tasks. Traditionally, this requires users to share their data with model owners for fine-tuning, which is not only costly but also raises significant privacy concerns. Furthermore, fine-tuning large-scale models is computationally intensive and often impractical for many users. To tackle these challenges, we introduce a system that combines offsite-tuning with physical-layer security, which provides local data owners with a lightweight adapter and a compressed emulator. Data owners then fine-tune the adapter locally and securely send it back to the model owners through a confidential channel for integration, ensuring privacy and resource conservation. Our paper focuses on optimizing computational resource allocation among data owners and the large model owner deployed on edge, and on the compression ratio of adapters. We incorporate a secrecy uplink channel to maximize the utility that we defined while minimizing system costs like energy consumption and delay. The optimization uses the Dinkelbach algorithm, fractional programming, successive convex approximation and alternating optimization. Experiments demonstrate our algorithm's superiority over baseline methods.

17:00 - 18:00    Poster / Demos Session
18:00 - 20:00    Welcome Reception (Titania hotel https://www.titania.gr/)


Wednesday, Oct 16, 2024

08:00 - 17:00    Registration

09:00 - 10:00    Keynote

Abstract: In order to cope with the vast amounts of data and the processing requirements of distributed intelligence services, novel synergies between networking and AI need to be leveraged; I will present several recent results in that area. First, we look into the joint optimization of service placement and request routing in dense multi-cell wireless networks, considering the storage requirements of the different services as well as the processing and communication requirements of the service requests. An algorithm that achieves close-to-optimal performance using a randomized rounding technique is presented. We then consider chains of virtual network functions and present allocation algorithms that comply with the related precedence constraints. Next, distributed learning is considered as a representative AI service at the network edge, and an algorithm that addresses the heterogeneity of the different clients is presented, towards providing unbiased services. We then look into the issue of limited network resources for training, as well as the asymmetric access profiles of the different clients, and provide a client selection algorithm that mitigates the effect of these asymmetries. Finally, we will present some recent results on combining graph neural networks (GNNs) and transformers for spatio-temporal network prediction and optimization.

10:00 - 10:30    Coffee Break
10:30 - 12:00    Session 4: Federated Learning

Session Chair: I-Hong Hou (Texas A&M University)

Overlay-Based Decentralized Federated Learning in Bandwidth-Limited Networks
Paper

Yudi Huang, Tingyang Sun, Ting He (Pennsylvania State University)

Abstract: The emerging machine learning paradigm of decentralized federated learning (DFL) has the promise of greatly boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination. Despite significant efforts on improving the communication efficiency of DFL, most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network, which fails to correctly capture the communication cost when learning over a general bandwidth-limited network, as encountered in many edge networks. In this work, we address this gap by leveraging recent advances in network tomography to jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network. By carefully analyzing the structure of our problem, we decompose it into a series of optimization problems that can each be solved efficiently, to collectively minimize the total training time. Extensive data-driven simulations show that our solution can significantly accelerate DFL in comparison with state-of-the-art designs.

Social Welfare Maximization for Federated Learning with Network Effects
Paper

Xiang Li, Yuan Luo (Shenzhen Institute of Artificial Intelligence and Robotics for Society, The Chinese University of Hong Kong, Shenzhen); Bing Luo (Duke Kunshan University); Jianwei Huang (Shenzhen Institute of Artificial Intelligence and Robotics for Society, The Chinese University of Hong Kong, Shenzhen)

Abstract: A proper mechanism design can help federated learning (FL) to achieve good social welfare by coordinating self-interested clients through the learning process. However, existing mechanisms neglect the network effects of client participation, leading to suboptimal incentives and social welfare. This paper addresses this gap by exploring network effects in FL incentive mechanism design. We establish a theoretical model to analyze FL model performance and quantify the impact of network effects on heterogeneous client participation. Our analysis reveals the non-monotonic nature of FL network effects. To leverage such effects, we propose a model trading and sharing (MTS) framework that allows clients to obtain FL models through participation or purchase. To tackle heterogeneous clients' strategic behaviors, we further design a socially efficient model trading and sharing (SEMTS) mechanism. Our mechanism achieves social welfare maximization solely through customer payments, without additional incentive costs. Experimental results on an FL hardware prototype demonstrate up to 148.86% improvement in social welfare compared to existing mechanisms.

Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning?
Paper

Peizhong Ju (The Ohio State University); Haibo Yang (Rochester Institute of Technology); Jia Liu, Yingbin Liang, Ness Shroff (The Ohio State University)

Abstract: Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing. While various algorithms along with their optimization analyses have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention. This lack of investigation can be attributed to the complex interplay between data heterogeneity and infrequent communication due to the local updates within the FL framework. This motivates us to investigate a fundamental question in FL: Can we quantify the impact of data heterogeneity and local updates on the generalization performance for FL as the learning process evolves? To this end, we conduct a comprehensive theoretical study of FL's generalization performance using a linear model as the first step, where the data heterogeneity is considered for both the stationary and online/non-stationary cases. By providing closed-form expressions of the model error, we rigorously quantify the impact of the number of the local updates (denoted as K) under three settings (K = 1, K < ∞, and K = ∞) and show how the generalization performance evolves with the number of rounds t. Our investigation also provides a comprehensive understanding of how different configurations (including the number of model parameters p and the number of training samples n) contribute to the overall generalization performance, thus shedding new insights (such as benign overfitting) for implementing FL over networks.
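
For orientation, the sketch below shows the generic FedAvg-style loop on a linear model that such analyses study, with K local gradient steps between communication rounds and heterogeneous synthetic clients. It only illustrates the role of K; it does not reproduce the paper's closed-form error expressions, and all constants are illustrative.

```python
import numpy as np

def local_sgd(w, X, y, k_steps, lr=0.01):
    """Run K local gradient steps of least-squares on one client's data."""
    for _ in range(k_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, k_steps):
    # Each client starts from the global model, takes K local updates,
    # and the server averages the results (FedAvg).
    locals_ = [local_sgd(w_global.copy(), X, y, k_steps) for X, y in clients]
    return np.mean(locals_, axis=0)

rng = np.random.default_rng(0)
p, n = 5, 40                      # model parameters p, samples per client n
w_true = rng.normal(size=p)
# Heterogeneous clients: each sees a differently scaled feature distribution.
clients = []
for scale in (0.5, 1.0, 2.0):
    X = rng.normal(scale=scale, size=(n, p))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

w = np.zeros(p)
for t in range(50):               # t: communication rounds
    w = fedavg_round(w, clients, k_steps=5)   # K = 5 local updates
print(np.linalg.norm(w - w_true))  # model error shrinks with t
```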

Federating from History in Streaming Federated Learning
Paper

Ruirui Zhang, Yifei Zou, Zhenzhen Xie, Xiao Zhang (Shandong University); Peng Li (The University of Aizu); Zhipeng Cai (Georgia State University); Xiuzhen Cheng, Dongxiao Yu (Shandong University)

Abstract: To address the online learning problem in distributed systems, Streaming Federated Learning (SFL) enables immediate model training by clients upon collecting new data, finding wide applications in AI-enabled Internet-of-Things and sensor networks. Given the variability in data distribution across different historical periods, the ability to recall and rapidly apply previously encountered data distributions significantly enhances the efficiency and accuracy of model training. In this paper, a demo based on real-world temperature datasets is presented to demonstrate the importance of history knowledge in local training and the federating process of SFL, which also shows that vanilla federated learning without considering history knowledge may even be harmful to model training. Observing this, we propose Fed-HIST, a federated learning framework that enables the clients to learn from the HISTory knowledge of the whole distributed learning system. Unlike direct raw data storage, Fed-HIST employs model architectures to capture the data distributions, offering a more space-efficient and privacy-preserving method of knowledge storage on a server pool. Additionally, a model similarity comparison scheme is designed to retrieve beneficial knowledge from the pool uploaded by the clients in the past. Such a history-aware federation can enhance the training efficiency of each client, requiring only the recurrence of similar data distributions among SFL participants. We validate our framework through extensive simulations on the MNIST, Fashion-MNIST, CIFAR10, and CIFAR100 datasets, benchmarking against 9 baselines and highlighting the importance of federating from history in the SFL problem through necessary ablation studies.

12:00 - 13:30    Lunch
13:30 - 15:00    Session 5: Resource Allocation and Scheduling

Session Chair: Iordanis Koutsopoulos (Athens University of Economics and Business)

Sustainable Provision of URLLC Services for V2N: Analysis and Optimal Configuration
Paper

Livia Elena Chatzieleftheriou (IMDEA Networks); Jesús Pérez-Valero (Universidad Carlos III de Madrid); Jorge Martín-Pérez (Universidad Politécnica de Madrid); Pablo Serrano (Universidad Carlos III de Madrid)

Abstract: The rising popularity of Vehicle-to-Network (V2N) applications is driven by the Ultra-Reliable Low-Latency Communications (URLLC) service offered by 5G. The availability of distributed resources could be leveraged to handle the enormous traffic arising from these applications, but introduces complexity in deciding where to steer traffic under the stringent delay requirements of URLLC. In this paper, we introduce the V2N Computation Offloading and CPU Activation (V2N-COCA) problem, which aims at finding the computation offloading and the edge/cloud CPU activation decisions that minimize the operational costs, both monetary and energetic, under stringent latency constraints. Among the challenges are the proven non-monotonicity of the objective function w.r.t. offloading decisions, and the non-existence of closed-form expressions for the sojourn time of tasks. We present a provably tight approximation for the latter, and we design BiQui, a provably asymptotically optimal algorithm for the V2N-COCA problem whose computational complexity is linear in the computing resources. We assess BiQui over real-world vehicular traffic traces, performing a sensitivity analysis and a stress-test. Results show that BiQui significantly outperforms state-of-the-art solutions, achieving optimal performance (found through exhaustive searches) in most of the scenarios.

Abstract: The problem of online scheduling of multi-server jobs is considered, where there are a total of K servers, and each job requires concurrent service from multiple servers for it to be processed. Each job on its arrival reveals its processing time and the number of servers from which it needs concurrent service, and an online algorithm has to make scheduling decisions at each time using only the information revealed so far, with the goal of minimizing the response/flow time. The worst-case input model is considered, and the performance metric is the competitive ratio. For the case when all job processing times (sizes) are the same, we show that the competitive ratio of any deterministic/randomized algorithm is at least Ω(K) and propose an online algorithm whose competitive ratio is at most K + 1. With unequal job sizes, we propose an online algorithm whose competitive ratio is at most 2K log(K w_max), where w_max is the maximum size of any job. With equal job sizes, we also consider the resource augmentation regime where an online algorithm has access to more servers than an optimal offline algorithm. With resource augmentation, we propose a simple online algorithm and show that it has a competitive ratio of 1 when provided with 2K servers with respect to an optimal offline algorithm with K servers.

Abstract: We analyze the problem of scheduling in wireless networks to meet end-to-end service guarantees. Using network slicing to decouple the queueing dynamics between flows, we show that the network's ability to meet hard throughput and deadline requirements is largely influenced by the scheduling policy. We characterize the feasible throughput/deadline region for a flow under a fixed route and set of slices, and find throughput- and deadline-optimal policies for a solitary flow. We formulate the feasibility problem for multiple flows in a general topology, and show its equivalence to finding a bounded-cost cycle on an exponentially large graph, which is not solvable in polynomial time by the best-known algorithm. Using a novel concept called delay deficit, we develop a sufficient condition for meeting deadlines as a function of inter-scheduling times, and show that regular schedules are optimal for satisfying this condition. Motivated by this, we design a polynomial-time algorithm that returns an (almost) regular schedule, optimized to meet service guarantees for all flows.

On Optimal Server Allocation for Moldable Jobs with Concave Speed-Up
Paper

Samira Ghanbarian (University of Waterloo, Canada); Arpan Mukhopadhyay (University of Warwick); Ravi R. Mazumdar (University of Waterloo, Canada); Fabrice M. Guillemin (Orange Labs, Lannion, France)

Abstract: A large proportion of jobs submitted to modern computing clusters and data centers are parallelizable and capable of running on a flexible number of computing cores or servers. Although allocating more servers to such a job results in a higher speed-up in the job's execution, it reduces the number of servers available to other jobs, which in the worst case, can result in an incoming job not finding any available server to run immediately upon arrival. Hence, a key question to address is: how to optimally allocate servers to jobs such that (i) the average execution time across jobs is minimized and (ii) almost all jobs find at least one server immediately upon arrival. To address this question, we consider a system with n servers, where jobs are parallelizable up to d(n) servers and the speed-up function of jobs is concave and increasing. Jobs not finding any available servers upon entry are blocked and lost. We propose a simple server allocation scheme that achieves the minimum average execution time of accepted jobs while ensuring that the blocking probability of jobs vanishes as the system becomes large (n → ∞). This result is established for various traffic conditions as well as for heterogeneous workloads. To prove our result, we employ Stein's method which also yields non-asymptotic bounds on the blocking probability and the mean execution time. Furthermore, our simulations show that the performance of the scheme is insensitive to the distribution of job execution times.

15:00 - 15:30    Coffee Break
15:30 - 17:00    Session 6: Wireless Systems and Sensing

Session Chair: Jeremy Gummeson (University of Massachusetts Amherst)

Abstract: In the rapidly evolving landscape of the Metaverse, enhanced by blockchain technology, the efficient processing of data has emerged as a critical challenge, especially in wireless communication systems. Addressing this need, our paper introduces the concept of data processing efficiency (DPE), aiming to maximize processed bits per unit of resource consumption in blockchain-empowered Metaverse environments. To achieve this, we propose the DPE-Aware User Association and Resource Allocation (DAUR) algorithm, a tailored solution for these complex systems. The DAUR algorithm transforms the challenging task of optimizing the sum of DPE ratios into a solvable convex optimization problem. It uniquely alternates the optimization of key variables like user association, work offloading ratios, task-specific computing resource distribution, bandwidth allocation, user power usage ratios, and server computing resource allocation ratios. Our extensive numerical results demonstrate the DAUR algorithm's effectiveness in maximizing DPE.

REHSense: Towards Battery-Free Wireless Sensing via Radio Frequency Energy Harvesting
Paper

Tao Ni, Zehua Sun (City University of Hong Kong); Mingda Han (Shandong University); Yaxiong Xie (University at Buffalo, SUNY); Guohao Lan (TU Delft); Zhenjiang Li (City University of Hong Kong); Tao Gu (Macquarie University); Weitao Xu (City University of Hong Kong)

Abstract: Diverse Wi-Fi-based wireless applications have been proposed, ranging from daily activity recognition to vital sign monitoring. Despite their remarkable sensing accuracy, the high energy consumption and the requirement for customized hardware modification hinder the wide deployment of the existing sensing solutions. In this paper, we propose REHSense, an energy-efficient wireless sensing solution based on Radio-Frequency (RF) energy harvesting. Instead of relying on a power-hungry Wi-Fi receiver, REHSense leverages an RF energy harvester as the sensor and utilizes the voltage signals harvested from the ambient Wi-Fi signals to enable simultaneous context sensing and energy harvesting. We design and implement REHSense using a commercial-off-the-shelf (COTS) RF energy harvester. Extensive evaluation of three fine-grained wireless sensing tasks (i.e., respiration monitoring, human activity recognition, and hand gesture recognition) shows that REHSense can achieve comparable sensing accuracy with conventional Wi-Fi-based solutions while adapting to different sensing environments, reducing the power consumption of sensing by 98.7% and harvesting up to 4.5 mW of power from RF energy.

A Deadline-Aware Scheduler for Smart Factory using WiFi 6
Paper

Mohit Jain, Anis Mishra (IIIT Delhi); Andreas Wiese (Technical University of Munich); Syamantak Das, Arani Bhattacharya, Mukulika Maity (IIIT-Delhi)

Abstract: Smart factories have data packets with a mix of stringent and non-stringent deadlines with varying levels of importance that need to be delivered via a wireless network. However, the scheduling of packets in the wireless network is crucial to satisfy the deadlines. In this work, we propose a technique of utilizing IEEE 802.11ax, popularly known as WiFi 6, for such applications. IEEE 802.11ax has a few unique characteristics, such as specific configurations of dividing the channels into resource units (RU) for packet transmission and synchronized parallel transmissions. We model the problem of scheduling packets by assigning a profit to each packet and then maximizing the sum of profits. We first show that this problem is strongly NP-hard, and then propose a 12-approximation algorithm. Our approximation algorithm uses a variant of local search to associate the right RU configuration to each packet and identify the duration of each parallel transmission. Finally, we extensively simulate different scenarios to show that our algorithm works better than other benchmarks.
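
To illustrate the profit-maximization framing (not the paper's 12-approximation local search over RU configurations), here is a generic greedy baseline for deadline scheduling on a single resource unit; all names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    pid: int
    duration: int   # transmission slots needed
    deadline: int   # must finish before this slot index
    profit: float

def greedy_schedule(packets, horizon):
    """Greedy baseline: scan packets by profit density and keep one whenever
    it still fits before its deadline on a single resource unit (slots may
    be non-contiguous, i.e., preemption is allowed in this simplification).

    The paper's algorithm instead local-searches over 802.11ax RU
    configurations and parallel transmissions; this sketch only conveys
    the profit-maximization framing.
    """
    busy = [False] * horizon
    chosen = []
    for p in sorted(packets, key=lambda p: p.profit / p.duration, reverse=True):
        free = [t for t in range(min(p.deadline, horizon)) if not busy[t]]
        if len(free) >= p.duration:
            # Place the packet as late as possible before its deadline.
            for t in free[-p.duration:]:
                busy[t] = True
            chosen.append(p.pid)
    return chosen

pkts = [Packet(1, 2, 4, 10.0), Packet(2, 1, 2, 6.0), Packet(3, 3, 9, 9.0)]
print(greedy_schedule(pkts, horizon=10))  # [2, 1, 3]
```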

18:00 - 19:30    Acropolis Museum guided tour (https://www.theacropolismuseum.gr/en/)

20:00 - 23:00     Banquet dinner (Le Grand Balcon Restaurant at St. George Lycabettus Lifestyle Hotel https://www.sgl.gr/taste)


Thursday, Oct 17, 2024

08:00 - 17:00    Registration
08:30 - 10:00    Session 7: Machine Learning-Based Systems

Session Chair: Bing Luo (Duke Kunshan University)

Active Learning for WBAN-based Health Monitoring
Paper

Cho-Chun Chiu, Tuan Nguyen, Ting He (Penn State University); Shiqiang Wang (IBM Research); Beom-Su Kim, Ki Il Kim (Chungnam National University)

Abstract: We consider a novel active learning problem motivated by the need to learn machine learning models for health monitoring in wireless body area networks (WBANs). Due to the limited resources at body sensors, collecting each unlabeled sample in a WBAN incurs a nontrivial cost. Moreover, training health monitoring models typically requires labels indicating the patient's health state that need to be generated by healthcare professionals, which cannot be obtained at the same pace as data collection. These challenges make our problem fundamentally different from classical active learning, where unlabeled samples are free and labels can be queried in real time. To handle these challenges, we propose a two-phased active learning method, consisting of an online phase where a coreset construction algorithm is proposed to select a subset of unlabeled samples based on their noisy predictions, and an offline phase where the selected samples are labeled to train the target model. The samples selected by our algorithm are proven to yield a guaranteed error in approximating the full dataset in evaluating the loss function. Our evaluation based on real health monitoring data and our own experimentation demonstrates that our solution can drastically save the data curation cost without sacrificing the quality of the target model.
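
A common building block for coreset-based sample selection is farthest-point (k-center greedy) sampling. The sketch below shows that generic construction; the paper's online algorithm additionally drives selection with noisy model predictions and proves an approximation guarantee, neither of which is reproduced here.

```python
import numpy as np

def k_center_greedy(X, k, seed=0):
    """Farthest-point sampling: iteratively add the point farthest from the
    current selection, bounding the covering radius of the coreset.

    Generic sketch of coreset construction for active learning; not the
    paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]
    # dist[i] = distance from point i to its nearest selected center
    dist = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))          # farthest uncovered point
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected

X = np.random.default_rng(1).normal(size=(500, 8))  # unlabeled sample pool
print(k_center_greedy(X, k=10))  # indices to send for offline labeling
```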

FreeGait: Liberalizing Wireless-Based Gait Recognition to Mitigate Non-Gait Human Behaviors
Paper

Dawei Yan (University of Science and Technology of China); Panlong Yang (Nanjing University of Information Science and Technology); Fei Shang, Feiyu Han, Yubo Yan, Xiang-Yang Li (University of Science and Technology of China)

Abstract: Recently, WiFi-based gait recognition technologies have been widely studied. However, most of them work on a strong assumption that users need to walk continuously and periodically under a constant body posture. Thus, a significant challenge arises when users engage in non-periodic or discontinuous behaviors (e.g., stopping and going, turning around during walking). This is because variations of non-gait behaviors interfere with the extraction of gait-related features, resulting in recognition performance degradation. To solve this problem, we propose FreeGait, which aims to mitigate the impact of users' non-gait behaviors on WiFi-based gait recognition. Specifically, we model this problem as domain adaptation, by learning domain-independent representations to extract behavior-independent gait features. We treat human behaviors with user labels as source domains, and human behaviors without user labels as target domains. However, directly applying domain adaptation to our specific problem is challenging, because the classification boundaries of the unknown target domains are unclear for WiFi signals. We align the posterior distributions of the source and target domains, and constrain the conditional distribution of the target domains to optimize the gait classification accuracy. To obtain enough source-domain data, we build a data augmentation module to generate data similar to the labeled data, and use supervised learning to make the data distinguishable between users. We conduct experiments with 20 people and 3 different scenarios, and the results show that accurate predictions across a total of 15 domains can be achieved by collecting and labeling only a small amount of data from 6 source domains, and user classification accuracy can be improved by up to 45% compared to other existing techniques.

Hermes: Boosting the Performance of Machine-Learning-Based Intrusion Detection System through Geometric Feature Learning
Paper

Chaoyu Zhang, Shanghao Shi (Virginia Tech); Ning Wang (University of South Florida); Xiangxiang Xu (Massachusetts Institute of Technology); Shaoyu Li (Virginia Tech); Lizhong Zheng (Massachusetts Institute of Technology); Randy Marchany, Mark Gardner, Y. Thomas Hou, Wenjing Lou (Virginia Tech)

Abstract: Anomaly-Based Intrusion Detection Systems (IDSs) have been extensively researched for their ability to detect zero-day attacks. These systems establish a baseline of normal behavior using benign traffic data and flag deviations from this norm as potential threats. They generally experience higher false alarm rates than signature-based IDSs. Unlike image data, where the observed features provide immediate utility, raw network traffic necessitates additional processing for effective detection. It is challenging to learn useful patterns directly from raw traffic data or simple traffic statistics (e.g., connection duration, packet inter-arrival time) as the complex relationships are difficult to distinguish. Therefore, some feature engineering becomes imperative to extract and transform raw data into new feature representations that can directly improve the detection capability and reduce the false positive rate. We propose a geometric feature learning method to optimize the feature extraction process. We employ contrastive feature learning to learn a feature space where normal traffic instances reside in a compact cluster. We further utilize H-Score feature learning to maximize the compactness of the cluster representing the normal behavior, enhancing the subsequent anomaly detection performance. Our evaluations using the NSL-KDD and N-BaIoT datasets demonstrate that the proposed IDS powered by feature learning can consistently outperform state-of-the-art anomaly-based IDS methods by significantly lowering the false positive rate. Furthermore, we deploy the proposed IDS on a Raspberry Pi 4 and demonstrate its applicability on resource-constrained Internet of Things (IoT) devices, highlighting its versatility for diverse application scenarios.

In-Sensor Machine Learning: Radio Frequency Neural Networks for Wireless Sensing
Paper

Jingyu Tong, Zhenlin An, Xiaopeng Zhao, Sicong Liao, Lei Yang (The Hong Kong Polytechnic University)

Abstract: Growing interest in wireless sensing, a cornerstone of the Artificial Intelligence of Things (AIoT), stems from its ability to gauge target states through nearby wireless signals. However, the escalating count of AIoT nodes escalates redundant data flow and exacerbates energy usage in AI cloud infrastructures. This amplifies the urgency for machine learning techniques that function in proximity to, or directly within, sensors. In light of this, we present the Radio-Frequency Neural Network (RFNN), a novel architecture that uses cost-effective transmissive intelligent surfaces to mimic the functions of a traditional neural network near (or in) sensors, transforming sensory nodes into intelligent terminals primed for machine learning. We first devised a unique training algorithm to mitigate the issues arising from unmodelable error-backward propagation; secondly, we incorporated contrastive learning to address the issue of blind labels stemming from environmental uncertainties. Our RFNN prototype, resonating in the 5 GHz WiFi band, has been honed across nine varied sensing tasks. The rigorous evaluation shows that it achieves a mean accuracy of 91.5% while consuming only 67.2 μJ of energy. This positions RFNN as a match in inference capability to its electronic neural network counterparts but with significantly diminished energy demands.

10:00 - 10:30    Coffee Break
10:30 - 12:00    Session 8: Online Learning and Optimization

Session Chair: Tao Ni (City University of Hong Kong)

Optimistic Joint Flow Control and Link Scheduling with Unknown Utility Functions
Paper

Xin Liu (ShanghaiTech University); Honghao Wei (Washington State University); Lei Ying (University of Michigan, Ann Arbor)

Abstract: This paper proposes new joint flow control and link scheduling (JFCLS) algorithms for the classical network utility maximization (NUM) problem with unknown utility functions. Our algorithm leverages the idea of optimism, i.e., being optimistic in using the historical information to predict the future impact of flow rate and link scheduling decisions, to reduce the oscillation in the flow rate and link scheduling decisions. The optimistic design leads to a gradient-type update for flow rate control. We prove that optimistic JFCLS with the gradient information of utility functions establishes a zero optimal utility gap with an O(1/T) convergence rate while guaranteeing a constant queue length at each node for any time slot. When only the values of utility functions are observed, we propose zero-order optimistic JFCLS and prove that it establishes a trade-off with an O(1/T¼) utility gap and O(T¾) queue length. Our experiments demonstrate that the proposed optimistic algorithm achieves a fast convergence rate and is very adaptive to network dynamics, such as flow dynamics or link failures.
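
For background, the classic queue-as-price gradient dynamics for NUM, which the optimistic update builds on, can be written in a few lines. This is the textbook scheme for known log utilities, not the paper's optimistic or zero-order variant; the two-link instance is a toy.

```python
import numpy as np

# Toy NUM instance: two flows share link 0; flow 1 also uses link 1.
routes = np.array([[1, 1],    # link 0 carries flows 0 and 1
                   [0, 1]])   # link 1 carries flow 1 only
capacity = np.array([1.0, 0.8])

def num_primal_dual(steps=8000, lr=0.01):
    """Queue-as-price gradient flow control for U(x) = log(x).

    Each flow sets U'(x) = price, i.e., x_f = 1 / (sum of queue lengths on
    its route); queues grow with excess arrivals. Generic textbook
    dynamics; the paper adds optimistic prediction and handles unknown
    utilities, which this sketch does not.
    """
    q = np.zeros(2)                         # per-link queues (dual prices)
    for _ in range(steps):
        price = routes.T @ q                # price seen by each flow
        x = 1.0 / np.maximum(price, 1e-3)   # flow-rate response
        arrivals = routes @ x
        q = np.maximum(q + lr * (arrivals - capacity), 0.0)
    return x, q

x, q = num_primal_dual()
print("rates:", x, "queues:", q)  # rates approach the fair point (0.5, 0.5)
```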

Regret Bounds for Online Learning for Hierarchical Inference
Paper

Ghina Al Atat (IMDEA Networks Institute); Puranjay Datta, Sharayu Moharir (IIT Bombay); Jaya Prakash Champati (IMDEA Networks Institute)

Abstract: Hierarchical Inference (HI) has emerged as a promising approach for efficient distributed inference between end devices deployed with small pre-trained Deep Learning (DL) models and edge/cloud servers running large DL models. Under HI, a device uses the local DL model to perform inference on the data samples it collects, and only the data samples on which this inference is likely to be incorrect are offloaded to a remote DL model running on the server. Thus, gauging the likelihood of incorrect local inference is key to implementing HI. A natural approach is to compute a confidence metric for the local DL inference and then use a threshold on this confidence metric to determine whether to offload or not. Recently, the HI online learning problem was studied to learn an optimal threshold for the confidence metric over a sequence of data samples collected over time. However, existing algorithms have computation complexity that grows with the number of rounds and do not exhibit a sub-linear regret bound. In this work, we propose the Hedge-HI algorithm and prove that it has [EQUATION] regret, where T is the number of rounds, and N_T is the number of distinct confidence metric values observed till round T. Further, under a mild assumption, we propose Hedge-HI-Restart, which has an [EQUATION] regret bound with high probability and has a much lower computation complexity that grows sub-linearly in the number of rounds. Using runtime measurements on Raspberry Pi, we demonstrate that Hedge-HI-Restart has a runtime lower by an order of magnitude and achieves cumulative loss close to that of the alternatives.
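
The underlying expert framework is the textbook Hedge (multiplicative weights) algorithm, here run over a fixed grid of candidate confidence thresholds. Hedge-HI instead grows the expert set with the observed confidence values (N_T) and adds restarts; this sketch omits both, and the offloading costs and simulator are made up.

```python
import math
import random

class HedgeOverThresholds:
    """Textbook Hedge over a fixed grid of confidence thresholds.

    Each expert is a candidate offloading threshold. Illustrative only:
    Hedge-HI grows the expert set online and Hedge-HI-Restart adds
    restarts, neither of which is shown here.
    """

    def __init__(self, thresholds, eta=0.1):
        self.thresholds = thresholds
        self.eta = eta
        self.log_w = [0.0] * len(thresholds)

    def sample_threshold(self):
        m = max(self.log_w)
        w = [math.exp(lw - m) for lw in self.log_w]
        return random.choices(self.thresholds, weights=w)[0]

    def update(self, loss_fn):
        # loss_fn(theta): counterfactual loss of threshold theta this round,
        # computable because every threshold's decision is observable.
        for i, th in enumerate(self.thresholds):
            self.log_w[i] -= self.eta * loss_fn(th)

hedge = HedgeOverThresholds([i / 10 for i in range(1, 10)])
for _ in range(1000):
    conf = random.random()                  # local model confidence
    local_correct = random.random() < conf  # crude accuracy simulator
    # Offload (cost 0.3) if confidence below threshold, else trust local.
    loss = lambda th, c=conf, ok=local_correct: (
        0.3 if c < th else (0.0 if ok else 1.0))
    hedge.update(loss)
print(hedge.sample_threshold())
```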

Online Learning in Blockchain-Based Energy Trading Systems
Paper

Yunshu Liu (Singapore University of Technology and Design); Man Hon Cheung (City University of Hong Kong); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen)

Abstract: In this paper, we consider a blockchain-based energy trading (BBET) system with the proof-of-stake (PoS) protocol. The system designer aims to minimize system cost by considering the prosumers' strategic token allocation between blockchain staking and energy purchase for their applications. This is challenging as the system designer does not know prosumers' private information of impatience levels towards different applications. To this end, we propose an online learning mechanism (OLM), which includes incentive mechanisms to guide both prosumers' private information reporting and staking decisions in two phases. In the exploration phase, we design a randomized staking reward to encourage prosumers' truthful reporting of their private information for the learning of impatience level distributions. Based on the threshold structure of the prosumers' equilibrium staking strategies, in the exploitation phase, we propose a learning-error-based reward to minimize the system cost considering the finite-sample bias. By characterizing the optimal exploration duration, we prove that OLM achieves asymptotically zero regret against the complete-information benchmark, with the regret bounded by [EQUATION] when operating for T time slots. We implement the corresponding smart contract in Ethereum to demonstrate the feasibility of our approach. Experiment results show that our mechanism reduces regret by an average of 74% compared to the state-of-the-art mechanism.

Intervention-Assisted Online Deep Reinforcement Learning for Stochastic Queuing Network Optimization
Paper

Jerrod Wigmore (MIT); Brooke Shrader (MIT Lincoln Lab); Eytan Modiano (MIT)

Abstract: Deep Reinforcement Learning (DRL) offers a powerful approach to training neural network control policies for stochastic queuing networks (SQN). However, traditional DRL methods rely on offline simulations or static datasets, limiting their real-world application in SQN control. This work proposes Online Deep Reinforcement Learning-based Controls (ODRLC) as an alternative, where an intelligent agent interacts directly with a real environment and learns an optimal control policy from these online interactions. SQNs present a challenge for ODRLC due to the unbounded nature of the queues within the network resulting in an unbounded state-space. An unbounded state-space is particularly challenging for neural network policies as neural networks are notoriously poor at extrapolating to unseen states. To address this challenge, we propose an intervention-assisted framework that leverages strategic interventions from known stable policies to ensure the queue sizes remain bounded. This framework combines the learning power of neural networks with the guaranteed stability of classical control policies for SQNs. We introduce a method to design these intervention-assisted policies to ensure strong stability of the network. Furthermore, we extend foundational DRL theorems for intervention-assisted policies and develop two practical algorithms specifically for ODRLC of SQNs. Finally, we demonstrate through experiments that our proposed algorithms outperform both classical control approaches and prior ODRLC algorithms.
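
The intervention idea can be summarized in a small wrapper: act with the learned policy inside a "safe" region of the state space and fall back to a known stable controller when queues grow too large. A minimal sketch, with a hypothetical trigger value and stand-in policies (the paper additionally designs the trigger region to certify strong stability and corrects the learning updates for intervened steps):

```python
def intervention_policy(state, nn_policy, stable_policy, trigger=100):
    """Hand control to a known stable policy (e.g., MaxWeight-style) whenever
    any queue exceeds the trigger, keeping queues bounded during online
    learning; otherwise use the learned neural-network policy.
    """
    if max(state["queues"]) > trigger:
        return stable_policy(state), True    # intervention step
    return nn_policy(state), False           # ordinary learned action

# Hypothetical stand-ins for the two controllers:
stable = lambda s: max(range(len(s["queues"])), key=lambda i: s["queues"][i])
learned = lambda s: 0  # e.g., a neural network's greedy action

action, intervened = intervention_policy({"queues": [3, 180, 7]},
                                         learned, stable)
print(action, intervened)  # serves queue 1 via the stable fallback
```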

12:00 - 13:30    Lunch
13:30 - 15:00    Session 9: Age of Information

Session Chair: Jianwei Huang (The Chinese University of Hong Kong, Shenzhen)

On The Active-Time Condition for Partial Indexability and Application to Heterogeneous-Channel AoI Minimization
Paper

Sixiang Zhou (Purdue University, West Lafayette); Xiaojun Lin (The Chinese University of Hong Kong and Purdue University (on leave))

Abstract: Motivated by an Age-of-Information (AoI) minimization problem for systems with both fast (but unreliable) and slow (but more reliable) channels, in this paper we are interested in MDPs (Markov Decision Processes) where multiple agents compete for multiple heterogeneous channels. Such an MDP is known to suffer from the curse of dimensionality when the number of sources is large. Recently, a partial index approach, which generalizes the Whittle index for single-channel systems, has been proposed to decompose such a large-scale MDP into smaller sub-problems, if each sub-problem satisfies the partial indexability and the precise division property. Unfortunately, verifying partial indexability and the precise division property is highly non-trivial. This paper provides a new Active-Time (AT) condition, which ensures the precise division property (which then implies partial indexability). Our new AT condition generalizes the AT condition for Whittle indexability in a non-trivial manner, and is much easier to verify for systems with heterogeneous channels. We then apply our AT condition to the fast-slow-channel setting, and establish its partial indexability for the first time in the literature. Our analysis reveals new semi-threshold structures of the optimal policy, and uses a new coupling approach to analyze the corresponding AT condition, which could also be of independent interest. Numerical simulations are provided to verify our theoretical results and to demonstrate the close-to-optimal performance of the partial index policy.

Age of Information in Mobile Networks: Fundamental Limits and Tradeoffs
Paper

Meng Zhang, Howard H. Yang (Zhejiang University); Ahmed Arafa (University of North Carolina at Charlotte); H. Vincent Poor (Princeton University)

Abstract: Age of information (AoI), defined for an information source as the time elapsed since the latest received update was generated, is a recently proposed metric that quantifies the timeliness of information delivery in a communication system. This paper studies a fundamental problem of how the achievable AoI scales in mobile networks. Specifically, we consider a network consisting of n/2 source-destination (S-D) pairs and employ the protocol model to characterize interference incurred by concurrent transmissions. We consider a general class of scheduling policies potentially with the multi-hop transmission and the duplication of packets to multiple nodes. The analysis of AoI faces significant challenges due to potential out-of-order packet delivery, the inherent tradeoffs between packet-centric metrics (throughput and delay), and their unexplored relation to AoI. We first show that the average per-node AoI in static settings scales as [EQUATION]. In the case of networks with i.i.d. mobility, where the node locations vary independently over time, we introduce an episodic technique that allows us to establish lower bounds and design and analyze scheduling policies as constructive upper bounds. Our analytical results reveal that the average per-node AoI scales as [EQUATION] under i.i.d. mobility, which highlights that mobility can enhance timeliness. Finally, we show that, in a more general class of wireless network settings, one can design the age-minimal scheduling policy by balancing throughput and delay.
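
For reference, the standard definitions behind the metric: with u(t) the generation time of the freshest update received by the monitor,

```latex
% Instantaneous age at the monitor at time t:
\Delta(t) = t - u(t)

% Time-average AoI, whose per-node scaling in the number of nodes n
% (static vs. i.i.d. mobility) is what the paper characterizes:
\bar{\Delta} = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \Delta(t)\,\mathrm{d}t
```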

Age of Information Optimization and State Error Analysis for Correlated Multi-Process Multi-Sensor Systems
Paper

Egemen Erbayat (George Washington University); Ali Maatouk (Yale University); Peng Zou, Suresh Subramaniam (George Washington University)

Abstract: In this paper, we examine a multi-sensor system where each sensor may monitor more than one time-varying information process and send status updates to a remote monitor over a common channel. We consider that each sensor's status update may contain information about more than one information process in the system subject to the system's constraints. To investigate the impact of this correlation on the overall system's performance, we conduct an analysis of both the average Age of Information (AoI) and source state estimation error at the monitor. Building upon this analysis, we subsequently explore the impact of the packet arrivals, correlation probabilities, and rate of processes' state change on the system's performance. Next, we consider the case where sensors have limited sensing abilities and distribute a portion of their sensing abilities across the different processes. We optimize this distribution to minimize the total AoI of the system. Interestingly, we show that monitoring multiple processes from a single source may not always be beneficial. Our results also reveal that the optimal sensing distribution for diverse arrival rates may exhibit a rapid regime switch, rather than smooth transitions, after crossing critical system values. This highlights the importance of identifying these critical thresholds to ensure effective system performance.

15:00 - 15:30    Coffee Break
15:30 - 17:00    Session 10: Measurement and Monitoring Systems

Session Chair: Mohamed Ibrahim (Hewlett Packard Labs)

Abstract: A critical challenge in deploying UAV-based wireless networks is optimizing the UAV's position in aerial space to ensure robust connectivity. To achieve optimal positioning, the UAV must estimate the propagation loss for each ground terminal across the aerial space, also called the Radio Environment Map (REM). Existing literature in this domain emphasizes accuracy enhancement of the REM estimations from sparse and noisy signal measurements, often leveraging deep learning techniques. However, these estimated REMs rapidly become outdated as UEs move, requiring repeated measurements and re-estimation. This introduces significant scalability challenges, especially in environments characterized by high UE mobility. In this paper, we propose SkyScale, a first-of-its-kind aerial network framework that uses Radio Tomographic Imaging (RTI) to estimate the wireless attenuation characteristics of the underlying terrain. Such characteristics are fundamental properties of the terrain and are agnostic to UE locations, thus allowing us to estimate REMs for any arbitrary UE location. We evaluate SkyScale on our WiFi-based UAV network testbed, traces from an LTE testbed, and realistic simulation studies. We demonstrate that our RTI-based SkyScale maintains REM accuracy within 3 dB while reducing measurement costs by 10× or more compared to state-of-the-art interpolation methods, making it highly scalable and adaptive to dynamic network conditions.

Multi-Task-Oriented UAV Crowd Sensing with Charging Budget Constraint
Paper

Jiahui Sun, Guiyun Fan, Haiming Jin (Shanghai Jiao Tong University); Yiwen Song (Carnegie Mellon University); Tianyuan Liu, Chenhao Ying, Yuan Luo, Jie Li (Shanghai Jiao Tong University)

Abstract: Nowadays, unmanned aerial vehicles (UAVs) are widely applied in crowd sensing. For UAV-enabled crowd sensing (UAVCS) systems, the sensing outcome and charging cost are two primary concerns. To achieve a satisfactory sensing outcome under the charging budget, we exploit joint moving, sensing, and charging scheduling of UAVs, as they all have critical impacts on these two objectives. However, the dynamically generated sensing targets and the variety of sensing tasks a UAVCS system may face make farsighted scheduling of UAVs rather challenging. To this end, we propose a novel multi-task constrained multi-agent reinforcement learning (MARL) method to help UAVs make distributed moving, sensing, and charging decisions. Specifically, we design a multi-task MARL framework to learn a single generic policy for a large collection of tasks, and propose a primal-dual training algorithm that alternates between improving the overall sensing outcome and reducing each task's constraint violation. Theoretically, we show that our algorithm provably converges, and analyze the optimality gap and constraint violation of the trained policy on unseen tasks. Extensive experiments on an incident dataset in New York City demonstrate that our method outperforms strong baselines in sensing outcome maximization and budget satisfaction, and also generalizes well to unseen tasks.

SONDAR: Size and Shape Measurements Using Acoustic Imaging
Paper

Xiaoxuan Liang, Zhaolong Wei, Dong Li, Jie Xiong, Jeremy Gummeson (University of Massachusetts Amherst)

Abstract: Acoustic signals have been used to sense important contextual information about targets, such as location and movement. This paper explores their potential for measuring the geometric information of moving targets, i.e., shape and size. We propose SONDAR, a novel shape and size measurement system using Inverse Synthetic Aperture Radar (ISAR) imaging on commodity devices. We first design a Doppler-free echo alignment method to accurately align target reflections even in the presence of a severe Doppler effect. Then we down-convert the received signals reflected from multiple scatter points on the target and construct a modulated downchirp signal to generate the image. We further develop a lightweight approach to extract the geometric information from a 2-D frequency image. We implement and evaluate a proof-of-concept system on both a Bela platform and a smartphone. Extensive experiments show that we can correctly estimate the target shape and achieve a millimeter-level size measurement accuracy. Our system can achieve high accuracies even when the target moves along a deviated trajectory, at a relatively high speed, and under obstruction.