Time (EDT)  | Monday, Oct. 23    | Tuesday, Oct. 24       | Wednesday, Oct. 25     | Thursday, Oct. 26
            | Workshops          | Main, Day 1            | Main, Day 2            | Main, Day 3
Location    | GWU Student Center | GWU Science & Engineering Hall (SEH), Tue-Thu
8:00 AM     | Registration       | Registration           | Registration           | Registration
8:30 AM     | Workshop Sessions  | Opening Remarks        |                        |
9:00 AM     | Workshop Sessions  | Keynote 1              | Keynote 2              | Special Session
10:00 AM    | Coffee Break       | Coffee Break           | Coffee Break           | Coffee Break
10:30 AM    | Workshop Sessions  | Session 1              | Session 3              | Session 6
12:00 PM    | Lunch              | Lunch (N2Women Event)  | Lunch                  | Lunch
1:30 PM     | Workshop Sessions  | Session 2              | Session 4              | Session 7
3:00 PM     | Coffee Break       | Coffee Break           | Coffee Break           | Coffee Break
3:30 PM     | Workshop Sessions  | Poster/Demo Session    | Session 5              | Session 8
6:00 PM     |                    | Welcome Reception      | Banquet (City Cruises) |

  • The main conference will be held at the Science & Engineering Hall (SEH) at The George Washington University. All main conference events, including the technical sessions, will take place in the Lehman Auditorium (SEH 1220), except for the Poster Session (open area directly outside SEH 1220), the Demo Session (SEH 1270), the N2Women Event (SEH 2990), and the Banquet (City Cruises Washington DC, Pier 4 at 6th and Water Streets, S.W.).
  • All the workshops will be held at the George Washington University Student Center.

Monday, Oct 23, 2023

8:00 AM    Registration
8:30 AM    Workshop Sessions
    Workshop-ReUNS (Student Center, Room 302 and Room 307)
    Workshop-AIoT (Student Center, Room 402)
    Workshop-6G-PDN (Student Center, Room 404)
    Workshop-IoST-5G&B (Student Center, Room 402)

Tuesday, Oct 24, 2023

08:00 - 08:30 AM     Registration
08:30 - 09:00 AM     Opening Remarks

09:00 - 10:00 AM     Keynote 1

Abstract: Over the past fifteen years, my research interests have gradually transitioned from wireless networks to distributed algorithms, with a recent emphasis on distributed optimization/learning. In this talk, I will discuss how I ended up making this transition between apparently disconnected and disparate areas, and provide an overview of some of the new problems (and solutions) we were able to define along the way, including resilient consensus and fault-tolerant optimization.

10:00 - 10:30 AM     Coffee Break
10:30 AM - 12:00 PM     Session 1: Federated Learning
Session Chair: Ting He (Penn State University)
Cache-Enabled Federated Learning Systems

Yuezhou Liu (Northeastern University), Lili Su (Northeastern University), Carlee Joe-Wong (Carnegie Mellon University), Stratis Ioannidis (Northeastern University), Edmund Yeh (Northeastern University), Marie Siew (Carnegie Mellon University)

Abstract: Federated learning (FL) is a distributed paradigm for collaboratively learning models without having clients disclose their private data. One natural and practically relevant metric to measure the efficiency of FL algorithms is the total wall-clock training time, which can be quantified by the product of the average time needed for a single iteration and the number of iterations for convergence. In this work, we focus on improving FL efficiency with respect to this metric through \emph{caching}. Specifically, instead of having all clients download the latest global model from a parameter server, we select a subset of clients to access a somewhat stale global model stored in caches with less delay. We propose \emph{CacheFL} -- a cache-enabled variant of FedAvg, and provide theoretical convergence guarantees in the general setting where the local data is imbalanced and heterogeneous. Armed with this result, we determine the caching strategies that minimize total wall-clock training time to a given convergence threshold for both stochastic and deterministic communication/computation delays. Through numerical experiments on real data traces, we show the advantage of our proposed scheme against several baselines, over both synthetic and real-world datasets.
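The wall-clock saving from serving some clients a stale cached model can be illustrated with a toy synchronous round. This is a minimal sketch of the idea, not the paper's CacheFL algorithm: the function name, the delay values, and the rule for which clients read the cache are all illustrative assumptions.

```python
def round_time(fresh_delay, cache_delay, compute, cached):
    """Wall-clock time of one synchronous FL round. The server waits for
    the slowest client; clients in `cached` read a (stale) model from a
    nearby cache with low delay instead of downloading the fresh global
    model from the parameter server with high delay."""
    finish = []
    for i, comp in enumerate(compute):
        download = cache_delay[i] if i in cached else fresh_delay[i]
        finish.append(download + comp)
    return max(finish)

# One straggler (client 0) dominates the round unless it is served from
# the cache; the price paid is that client 0 trains on a stale model.
fresh = [5.0, 1.0, 1.0]
cache = [1.0, 1.0, 1.0]
comp = [2.0, 2.0, 2.0]
print(round_time(fresh, cache, comp, cached=set()))   # all clients fetch fresh
print(round_time(fresh, cache, comp, cached={0}))     # straggler reads cache
```

The convergence analysis in the paper is what makes this trade-off principled: staleness slows per-iteration progress, so the caching strategy must balance the two factors the abstract names.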

Incentive Mechanism Design for Federated Learning and Unlearning

Ningning Ding (Northwestern University), Zhenyu Sun (Northwestern University), Ermin Wei (Northwestern University), Randall Berry (Northwestern University)

Abstract: To protect users' \emph{right to be forgotten} in federated learning, federated unlearning aims at eliminating the impact of leaving users' data on the global learned model. Current research in federated unlearning has mainly concentrated on developing effective and efficient unlearning techniques. However, the issue of incentivizing valuable users to remain engaged and preventing their data from being unlearned is still under-explored, yet important to the unlearned model performance. This paper focuses on the incentive issue and develops an incentive mechanism for federated learning and unlearning. We first characterize the leaving users' impact on the global model accuracy and the required communication rounds for unlearning. Building on these results, we propose a four-stage game to capture the interaction and information updates during the learning and unlearning process. A key contribution is to summarize users' multi-dimensional private information into one-dimensional metrics to guide the incentive design. We show that users who incur high costs and experience significant training losses are more likely to discontinue their engagement through federated unlearning. The server tends to retain users who make substantial contributions to the model but faces a trade-off on users' training losses, as large training losses of retained users increase privacy costs but decrease unlearning costs. The numerical results demonstrate the necessity of unlearning incentives for retaining valuable leaving users, and also show that our proposed mechanisms decrease the server's cost by up to 53.91\% compared to state-of-the-art benchmarks.

Anarchic Federated Learning with Delayed Gradient Averaging

Dongsheng Li (Auburn University), Xiaowen Gong (Auburn University)

Abstract: The rapid advances in federated learning (FL) in the past few years have recently inspired a great deal of research on this emerging topic. Existing work on FL often assumes that clients participate in the learning process with some particular pattern (such as balanced participation), and/or in a synchronous manner, and/or with the same number of local iterations, while these assumptions can be hard to hold in practice. In this paper, we propose AFL-DGA, an Anarchic Federated Learning algorithm with Delayed Gradient Averaging, which gives maximum freedom to clients. In particular, AFL-DGA allows clients to 1) participate in any rounds; 2) participate asynchronously; 3) participate with any number of local iterations; 4) perform gradient computations and gradient communications in parallel. The proposed AFL-DGA algorithm enables clients to participate in FL flexibly according to their heterogeneous and time-varying computation and communication capabilities, and also efficiently by improving utilization of their computation and communication resources. We characterize performance bounds on the learning loss of AFL-DGA as a function of clients' local iteration numbers, local computation delays, and global gradient delays. Our results show that the AFL-DGA algorithm can achieve a convergence rate of $O(\frac{1}{\sqrt{NT}})$ and also a linear convergence speedup, which matches that of existing benchmarks. The results also characterize the impacts of clients' various parameters on the learning loss, which provides useful insights. Numerical results demonstrate the efficiency of the proposed algorithm.
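A minimal sketch of the "anarchic" interaction pattern described above: clients pull whatever model version is current and push gradients whenever they finish, and the server applies each (possibly stale) gradient as it arrives. The class and update rule are illustrative assumptions, not the paper's AFL-DGA algorithm, which additionally overlaps gradient computation with communication.

```python
import numpy as np

class AnarchicServer:
    """Clients may join in any round, asynchronously, with any amount of
    local work: the server simply applies whatever gradient arrives and
    tracks how stale it was (the delay the analysis must bound)."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr
        self.version = 0

    def pull(self):
        # A client grabs the current model and notes its version number.
        return self.w.copy(), self.version

    def push(self, grad, pulled_version):
        staleness = self.version - pulled_version  # global gradient delay
        self.w -= self.lr * grad
        self.version += 1
        return staleness

# One client repeatedly minimizing ||w - 1||^2; with more clients the
# pushes would interleave and staleness would be nonzero.
target = np.ones(3)
server = AnarchicServer(dim=3)
for _ in range(60):
    w, v = server.pull()
    server.push(2.0 * (w - target), v)
print(np.round(server.w, 3))
```

With multiple clients pulling and pushing at different rates, `staleness` becomes positive, and the paper's bounds quantify how that delay affects the learning loss.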

Connectivity-Aware Semi-Decentralized Federated Learning over Time-Varying D2D Networks

Rohit Parasnis (Purdue University), Seyyedali Hosseinalipour (University at Buffalo (SUNY)), Yun-Wei Chu (Purdue University), Mung Chiang (Princeton University/Purdue University), Christopher G. Brinton (Purdue University)

Abstract: Semi-decentralized federated learning blends the conventional device-to-server (D2S) interaction structure of federated model training with localized device-to-device (D2D) communications. We study this architecture over practical edge networks with multiple D2D clusters modeled as time-varying and directed communication graphs. Our investigation results in an algorithm that controls the fundamental trade-off between (a) the rate of convergence of the model training process towards the global optimizer, and (b) the number of D2S transmissions required for global aggregation. Specifically, in our semi-decentralized methodology, D2D consensus updates are injected into the federated averaging framework based on column-stochastic weight matrices that encapsulate the connectivity within the clusters. To arrive at our algorithm, we show how the expected optimality gap in the current global model depends on the greatest two singular values of the weighted adjacency matrices (and hence on the densities) of the D2D clusters. We then derive tight bounds on these singular values in terms of the node degrees of the D2D clusters, and we use the resulting expressions to design a threshold on the number of clients required to participate in any given global aggregation round so as to ensure a desired convergence rate. Simulations performed on real-world datasets reveal that our connectivity-aware algorithm reduces the total communication cost required to reach a target accuracy significantly compared with baselines depending on the connectivity structure and the learning task.
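The column-stochastic weight matrices mentioned above have a key property worth seeing concretely: one D2D consensus round preserves the cluster-wide sum of the local models even on a directed graph. The 3-node directed ring and its weights below are illustrative assumptions, not a topology from the paper.

```python
import numpy as np

# Column-stochastic weights for a directed 3-node ring with self-loops:
# each node splits its current model equally between itself and its one
# out-neighbor, so every column of A sums to 1.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])

X = np.array([[1.0, 4.0],    # local model of node 0 (rows = nodes)
              [2.0, 0.0],    # node 1
              [6.0, 2.0]])   # node 2

def d2d_round(A, X):
    """One intra-cluster consensus update: x_i <- sum_j A[i, j] * x_j."""
    return A @ X

Y = d2d_round(A, X)
# Column-stochasticity (1^T A = 1^T) preserves the sum of local models,
# which is what lets a cluster later report a faithful aggregate; the
# mixing speed is governed by the second-largest singular value of A,
# the quantity the paper's convergence bounds depend on.
print(Y.sum(axis=0), X.sum(axis=0))
```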

12:00 - 1:30 PM     Lunch (N2Women Event)
1:30 - 3:00 PM     Session 2: Wireless Systems and Experimentation
Session Chair: Zhichao Cao (Michigan State University)
2ACE: Spectral Profile-driven Multi-resolutional Compressive Sensing for mmWave Channel Estimation

Yiwen Song (Carnegie Mellon University), Changhan Ge (The University of Texas at Austin), Lili Qiu (University of Texas at Austin), Yin Zhang

Abstract: Channel estimation is critical to millimeter-wave (mmWave) communication. Unlike sub-6 GHz WiFi, commercial off-the-shelf 60 GHz WiFi devices adopt a single RF chain and can only report the combined received signal strength (RSS) instead of the antenna-wise channel state information (CSI). Therefore, recovering the CSI using a limited number of RSS measurements is important but faces the following challenges: (i) solving a non-convex objective is hard and computationally heavy, (ii) the estimation error is high with insufficient RSS measurements, and (iii) the channel fluctuates dynamically. To jointly tackle these challenges, we propose 2ACE, an Accelerated and Accurate Channel Estimation approach using spectral profile-driven multi-resolutional compressive sensing. Our thorough experiments show that 2ACE yields a 2-8 dB reduction in CSI estimation error, a 1-5 dB improvement in beamforming performance, and a 5-10 degree reduction in angle-of-departure estimation error over existing schemes.

kaNSaaS: Combining Deep Learning and Optimization for Practical Overbooking of Network Slices

Sergi Alcalá-Marín (IMDEA Networks and University Carlos III of Madrid), Antonio Bazco-Nogueras (IMDEA Networks Institute), Albert Banchs (IMDEA Networks and University Carlos III of Madrid), Marco Fiore (IMDEA Networks Institute)

Abstract: Cloud-native mobile networks pave the road for long-envisioned Network Slicing as a Service (NSaaS) paradigms. Slice overbooking is a promising strategy for the management of NSaaS, which maximizes the revenues from admitted slices by exploiting the fact that they are unlikely to fully utilize their reserved resources concurrently. While seminal works have suggested the significant potential of overbooking for NSaaS in simplistic scenarios, its realization is challenging in practical scenarios with realistic slice demands, where the performance of the paradigm remains to be tested. In this paper, we propose kaNSaaS, a complete solution for NSaaS management with slice overbooking that combines deep learning and classical optimization to jointly solve the key tasks of admission control and resource allocation. Experiments with large-scale measurement data of actual tenant demands show the effectiveness of our solution, and provide a first comprehensive assessment of the economic advantages that overbooking can bring in production networks. In such settings, kaNSaaS increases the network infrastructure operator's profits by 300% with respect to NSaaS management strategies that do not employ overbooking, while outperforming state-of-the-art overbooking-based approaches by more than 20%.

A New Paradigm of Communication-Aware Collaborative Positioning for FutureG Wireless Systems

Yu-Tai Lin (Georgia Institute of Technology), Karthikeyan Sundaresan (Georgia Institute of Technology)

Abstract: Future-generation wireless systems are rapidly evolving to support positioning as a native feature using their communication infrastructure. Notwithstanding the dependence on a densely deployed infrastructure (for accurate positioning), we highlight the significant degradation in communication performance that the overhead of such an infrastructure-based positioning (IP) approach can bring, warranting timely attention. We propose a fundamentally different paradigm of communication-aware collaborative positioning (CO2P), whereby the burden of positioning is offloaded to client devices in an intelligent, communication-aware, and collaborative (peer-to-peer) manner that reduces overhead and improves spatial reuse to preserve communication performance, without compromising on positioning accuracy. Through technically sound algorithms, CO2P addresses the underlying tradeoff between communication performance and positioning accuracy to deliver efficient coexistence, providing a two-fold increase in both throughput and accuracy over conventional IP approaches. CO2P is also orchestrated as a practical, distributed, adoption-friendly solution that is realized using WiFi's positioning protocol.

AirFC: Designing Fully Connected Layers for Neural Networks with Wireless Signals

Guillem Reus-Muns (Northeastern University), Kubra Alemdar (Northeastern University), Sara Garcia Sanchez (Northeastern University), Debashri Roy (Northeastern University), Kaushik R. Chowdhury (Northeastern University)

Abstract: This paper proposes and experimentally validates a new paradigm for computing with wireless signals over-the-air (OTA). It demonstrates the first fully connected (FC) neural network (NN) constructed entirely using channel propagation and signal interference principles. Our design is based on architecting the desired linear operation of an FC layer through the superposition of signals emitted from multiple transmitters and received at a single receiver, similar to multiple input single output (MISO) systems. Our design takes into account several practical considerations, such as the impact of multiple subcarriers, the number of transmit antennas, and the changing wireless channel. The key outcome of our work is developing a principled methodology that transforms a given trained digital FC NN into its OTA equivalent. This novel computational paradigm, which we call AirFC, allows us to run NN tasks without compute-specific hardware during tests. We validate our design using 9 time-synchronized software-defined radios (SDRs) available on the ORBIT testbed, emulating a 16 antenna array. We use the MNIST dataset as input to our wireless FC NN and demonstrate classification with 92.61\% accuracy, which proves that our NN with OTA FC layers performs similar to the conventional, all-digital version with an accuracy decrease of only 0.73\%.
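The core identity behind an over-the-air FC layer can be checked in a few lines: if transmitter i pre-scales its activation by w_i / h_i, the channel's additive superposition computes the weighted sum w·x at the single receiver. Channel-inversion precoding and real-valued channels are simplifying assumptions for illustration here, not necessarily the paper's exact precoding design.

```python
import numpy as np

def ota_neuron(x, w, h):
    """One FC-layer neuron computed 'over the air': each transmitter i
    pre-scales its input by w_i / h_i (channel inversion, an assumed
    precoder), and the wireless channel's superposition performs the
    weighted sum at the single-antenna receiver."""
    tx = (w / h) * x          # per-transmitter precoding
    rx = np.sum(h * tx)       # the channel applies h_i and adds signals
    return rx

x = np.array([1.0, -2.0, 0.5])   # input activations, one per transmitter
w = np.array([0.2, 0.4, -1.0])   # trained digital FC weights
h = np.array([0.9, 1.3, 0.7])    # channel gains (complex in practice)
print(ota_neuron(x, w, h), w @ x)
```

A full layer stacks one such superposition per output neuron; the paper's contribution is handling the practical effects (multiple subcarriers, antenna counts, channel variation) that this idealized identity ignores.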

3:00 - 3:30 PM     Coffee Break
3:30 - 5:30 PM     Poster/Demo Session
Poster: A Novel Region-of-Interest Based UAV Planning Strategy for Mitigating Urban Peak Demand

Ruide Cao (SUSTech Institute of Future Networks; DET of Heyuan), Jiao Ye (SZU School of Architecture and Urban Planning), Qian You (SUSTech Institute of Future Networks), Jianghan Xu (SUSTech Institute of Future Networks), Yi Wang (SUSTech Institute of Future Networks; Peng Cheng Laboratory; DET of Heyuan), Shiyu Jiang (Wuhan Maritime Communications Research Institute), Yaomin Li (Wuhan Maritime Communications Research Institute)

Poster: Extracting Speech from Subtle Room Object Vibrations Using Remote mmWave Sensing

Cong Shi (New Jersey Institute of Technology), Tianfang Zhang (Rutgers University), Zhaoyi Xu (Rutgers University), Shuping Li (Rutgers University), Donglin Gao (Rutgers University), Changming Li (Rutgers University), Athina Petropulu (Rutgers University), Chung-Tse Michael Wu (Rutgers University), Yingying Chen (Rutgers University)

Poster: Unobtrusively Mining Vital Sign and Embedded Sensitive Info via AR/VR Motion Sensors

Tianfang Zhang (Rutgers University), Zhengkun Ye (Temple University), Ahmed Tanvir Mahdad (Texas A&M University, College Station), Md Mojibur Rahman Redoy Akanda (Texas A&M University, College Station), Cong Shi (New Jersey Institute of Technology), Nitesh Saxena (Texas A&M University, College Station), Yan Wang (Temple University), Yingying Chen (Rutgers University)

Poster: Accordion: Toward a Limited Contention Protocol for Wi-Fi 6 Scheduling

Shyam Krishnan Venkateswaran (Georgia Institute of Technology), Ching-Lun Tai (Georgia Institute of Technology), Raghupathy Sivakumar (Georgia Institute of Technology)

Poster: Timestamp Verifiability in Proof-of-Work

Tzuo Hann Law (Unaffiliated), Selman Erol (Carnegie Mellon University), Lewis Tseng (Clark University)

Demo: Immersive Remote Monitoring and Control for Internet of Things

Xiaoyi Wu (The Pennsylvania State University), Jiangong Chen (The Pennsylvania State University), Rui Tang (The Pennsylvania State University), Kefan Wu (University of Pennsylvania), Bin Li (The Pennsylvania State University)

Demo: Meta2Locate: Meta Surface Enabled Indoor Localization in Dynamic Environments

Qinpei Luo (Peking University), Ziang Yang (Peking University), Boya Di (Peking University), Chenren Xu (Peking University)

Demo: A Prototype for Detecting and Localizing Hidden Devices in Unfamiliar Environments

Xiangyu Ju (National University of Defense Technology Hunan, China), Biao Han (National University of Defense Technology Hunan, China), Yitang Chen (National University of Defense Technology Hunan, China), Jinrong Li (National University of Defense Technology Hunan, China)

6:00 PM     Welcome Reception

Wednesday, Oct 25, 2023

08:00 - 09:00 AM     Registration

09:00 - 10:00 AM     Keynote 2

Abstract: In this talk, I will first go over the timeline of 3GPP standardization of 5G Advanced, which paves the road towards 6G. I will highlight the main features of NR Release 18 that set off the 5G Advanced evolution. I will next discuss the key market trends and technology drivers of 6G, with the emphasis that 6G will be a smart platform offering new capabilities beyond communications. I will then discuss key longer-term research vectors. Finally, I will point out early regional efforts on 6G collaboration among government, industry, and academia.

10:00 - 10:30 AM     Coffee Break
10:30 AM - 12:00 PM     Session 3: Overcoming Delays and Age of Information
Session Chair: Clement Kam (U.S. Naval Research Laboratory)
Age of Information Diffusion on Social Networks: Optimizing Multi-Stage Seeding Strategies

Songhua Li (Singapore University of Technology and Design), Lingjie Duan (Singapore University of Technology and Design)

Abstract: To promote viral marketing, major social platforms (e.g., Facebook Marketplace and Pinduoduo) repeatedly select and invite different users (as seeds) in online social networks to share fresh information about a product or service with their friends. We are thus motivated to optimize the multi-stage seeding process of viral marketing in social networks, and adopt the recent notions of the peak and the average age of information (AoI) to measure the timeliness of promotion information received by network users. Our problem differs from the literature on information diffusion in social networks, which is limited to one-time seeding and overlooks AoI dynamics or information replacement over time. As a critical step, we manage to develop closed-form expressions that characterize and trace AoI dynamics over any social network. For the peak AoI problem, we first prove the NP-hardness of our multi-stage seeding problem by a highly non-straightforward reduction from the dominating set problem, and then present a new polynomial-time algorithm that achieves good approximation guarantees (e.g., less than 2 for linear network topology). For minimizing the average AoI, we also prove that our problem is NP-hard by properly reducing it from the set cover problem. Benefiting from our two-sided bound analysis on the average AoI objective, we build up a new framework for approximation analysis and link our problem to a much simplified sum-distance minimization problem. This intriguing connection inspires us to develop another polynomial-time algorithm that achieves a good approximation guarantee. Finally, we run extensive experiments on a real social network to validate our theoretical findings.
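The AoI dynamics that the closed-form expressions above trace can be illustrated for a single user: the age grows by one per slot and resets when fresh promotion information arrives. This toy recursion (the slot convention and update times are assumed for illustration) is the quantity underlying both the peak-AoI and average-AoI objectives.

```python
def aoi_trace(update_slots, horizon):
    """Discrete-slot age of information for one user: age grows by 1 per
    slot and resets to 1 in the slot after a fresh update is received."""
    age, trace = 0, []
    for t in range(horizon):
        trace.append(age)
        age = 1 if t in update_slots else age + 1
    return trace

trace = aoi_trace(update_slots={3, 6}, horizon=9)
peak_aoi = max(trace)                 # worst staleness seen by the user
avg_aoi = sum(trace) / len(trace)     # time-average staleness
print(trace, peak_aoi, avg_aoi)
```

In the paper's setting, each user's update slots are determined by the multi-stage seeding and the diffusion through the network, which is what makes choosing the seeds hard.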

A Whittle Index Policy for the Remote Estimation of Multiple Continuous Gauss-Markov Processes over Parallel Channels

Tasmeen Zaman Ornee (Dept. of ECE, Auburn University, Auburn, AL), Yin Sun (Dept. of ECE, Auburn University)

Abstract: In this paper, we study a sampling and transmission scheduling problem for multi-source remote estimation, where a scheduler determines when to take samples from multiple continuous-time Gauss-Markov processes and send the samples over multiple channels to remote estimators. The sample transmission times are i.i.d. across samples and channels. The objective of the scheduler is to minimize the weighted sum of the time-average expected estimation errors of these Gauss-Markov sources. This problem is a continuous-time Restless Multi-armed Bandit (RMAB) problem with a continuous state space. We prove that the bandits are indexable and derive an exact expression of the Whittle index. To the extent of our knowledge, this is the first Whittle index policy for multi-source signal-aware remote estimation of Gauss-Markov processes. We further investigate signal-agnostic remote estimation and develop a Whittle index policy for multi-source Age of Information (AoI) minimization over parallel channels with i.i.d. random transmission times. Our results unite two theoretical frameworks for remote estimation and AoI minimization: threshold-based sampling and Whittle index-based scheduling. In the single-source, single-channel scenario, we demonstrate that the optimal solution to the sampling and scheduling problem can be equivalently expressed as both a threshold-based sampling strategy and a Whittle index-based scheduling policy. Notably, the Whittle index is equal to zero if and only if two conditions are satisfied: (i) the channel is idle, and (ii) the estimation error is precisely equal to the threshold in the threshold-based sampling strategy. Moreover, the methodology employed to derive threshold-based sampling strategies in the single-source, single-channel scenario plays a crucial role in establishing indexability and evaluating the Whittle index in the more intricate multi-source, multi-channel scenario. Our numerical results show that the proposed policy achieves a high performance gain over existing policies when some of the Gauss-Markov processes are highly unstable.
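A signal-aware threshold sampler of the kind this line of work builds on can be sketched for a single discretized Gauss-Markov (Ornstein-Uhlenbeck) source. The zero-order-hold estimator and all parameter values below are simplifying assumptions for illustration, not the paper's exact estimator or thresholds.

```python
import math, random

def threshold_sampler(beta, sigma, thresh, dt, horizon, seed=0):
    """Sample a discretized OU process X only when the estimation error
    |X - X_hat| exceeds `thresh`; the remote estimator holds the last
    sample (a zero-order-hold simplification)."""
    rng = random.Random(seed)
    x, x_hat, samples = 0.0, 0.0, 0
    for _ in range(horizon):
        # Euler-Maruyama step of dX = -beta * X dt + sigma dW
        x += -beta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if abs(x - x_hat) > thresh:
            x_hat = x          # take a sample; remote estimate refreshed
            samples += 1
    return samples

# A lower threshold samples more aggressively: smaller estimation error
# at the cost of more channel use -- the tradeoff the Whittle index prices.
aggressive = threshold_sampler(0.5, 1.0, 0.2, 0.1, 500)
lazy = threshold_sampler(0.5, 1.0, 2.0, 0.1, 500)
print(aggressive, lazy)
```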

Age Minimization with Energy and Distortion Constraints

Guidan Yao (The Ohio State University), Chih-Chun Wang (Purdue University), Ness Shroff (The Ohio State University)

Abstract: In this paper, we consider a status update system, where an access point collects measurements from multiple sensors that monitor a common physical process, fuses them, and transmits the aggregated sample to the destination over an erasure channel. Under a typical information fusion scheme, the distortion of the fused sample is inversely proportional to the number of measurements received. Our goal is to minimize the long-term average age while satisfying the average energy and general age-based distortion requirements. Specifically, we focus on the setting in which the distortion requirement is stricter when the age of the update is older. We show that the optimal policy is a mixture of two stationary, deterministic, threshold-based policies, each of which is optimal for a parameterized problem that aims to minimize the weighted sum of the age and energy under the distortion constraint. We then derive analytically the associated optimal average age-cost function and characterize its performance in the \emph{large threshold regime}, the results of which shed critical insights on the tradeoff among age, energy, and the distortion of the samples. We have also developed a closed-form solution for the special case when the distortion requirement is independent of the age, arguably the most important setting for practical applications.

On a Continuous-Time Martingale and Two Applications

Sima Mehri (University of Warwick), Florin Ciucu (University of Warwick)

Abstract: We construct a continuous-time martingale to analyze two queueing systems: One is the $GI^\textrm{X}/M/1$ queue with light-tailed batch arrivals for which we obtain the first exact result in closed-form when $X$ has a Geometric distribution, and stochastic bounds otherwise. The second application concerns queues with Semi-Markovian (SM) inter-arrival times; of particular interest is the scenario with multiplexed SM sources whose aggregate loses the SM property. Unlike existing exact but implicit solutions which rely on numerical transform methods, even in the $M^\textrm{r}/M/1$ case, our stochastic bounds are in closed-form and shown to be (mostly) numerically accurate.

12:00 - 1:30 PM     Lunch
1:30 - 3:00 PM     Session 4: Network Resource Allocation
Session Chair: Giovanni Schembra (University of Catania)
The Power of Two Choices with Load Comparison Errors

Sanidhay Bhambay (University of Warwick), Arpan Mukhopadhyay (University of Warwick), Thirupathaiah Vasantam (Durham University)

Abstract: We consider a system with $n$ unit-rate servers where jobs arrive according to a Poisson process with rate $n\lambda$ ($\lambda <1$). In the standard \textit{Power-of-two} or Po2 scheme, for each incoming job, a job dispatcher samples two servers uniformly at random and sends the incoming job to the least loaded of the two sampled servers. However, in practice, the load information may not be accurate at the job dispatcher. In this paper, we analyze the effects of erroneous load comparisons on the performance of the Po2 scheme. Specifically, we consider {\em load-dependent} and {\em load-independent} errors. In the load-dependent error model, an incoming job is sent to the server with the larger queue length among the two sampled servers with an error probability $\epsilon$ if the difference in the queue lengths of the two sampled servers is less than or equal to a constant $g$; no error is made if the queue-length difference is higher than $g$. For this type of error, we show that, in the large-system limit, the benefits of the Po2 scheme are retained for all values of $g$ and $\epsilon$ as long as the system is heavily loaded, i.e., $\lambda$ is close to $1$. In the load-independent error model, the incoming job is sent to the sampled server with the {\em maximum load} with an error probability of $\epsilon$ independent of the loads of the sampled servers. For this model, we show that the performance benefits of the Po2 scheme are retained only if $\epsilon \leq 1/2$; for $\epsilon > 1/2$ we show that the stability region of the system reduces and the system performs poorly in comparison to the {\em random scheme}. To prove our stability results, we develop a generic approach to bound the drifts of Lyapunov functions for any state-dependent load balancing scheme. Furthermore, the mean-field analysis in our paper uses a new approach to characterize fixed points which do not admit a recursion.
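The load-independent error model is easy to simulate: with probability ε the dispatcher sends the job to the longer of the two sampled queues. The system sizes and the sum-of-squares imbalance measure below are illustrative choices, not the paper's analysis.

```python
import random

def po2_queues(n_servers, n_jobs, eps, seed=1):
    """Power-of-two dispatch with load-independent comparison errors:
    with probability eps the job joins the *longer* sampled queue."""
    rng = random.Random(seed)
    q = [0] * n_servers
    for _ in range(n_jobs):
        i, j = rng.randrange(n_servers), rng.randrange(n_servers)
        shorter, longer = sorted((i, j), key=lambda k: q[k])
        target = longer if rng.random() < eps else shorter
        q[target] += 1
    return q

def imbalance(q):
    # Sum of squared queue lengths: minimized by a perfectly even split.
    return sum(x * x for x in q)

error_free = po2_queues(10, 200, eps=0.0)
noisy = po2_queues(10, 200, eps=0.9)
print(imbalance(error_free), imbalance(noisy))
```

With the same random seed, the only difference between the two runs is the comparison error, so the gap in imbalance isolates the effect the paper analyzes (here for a large ε > 1/2, where the benefits of Po2 are lost).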

Abstract: Fair resource allocation is one of the most important topics in communication networks. Existing solutions almost exclusively assume that each user's utility function is known and concave. This paper seeks to answer the following question: how do we allocate resources when utility functions are unknown, even to the users? The answer has become increasingly important for next-generation AI-aware communication networks, where user utilities are complex and their closed forms are hard to obtain. In this paper, we provide a new solution using a distributed and data-driven bilevel optimization approach, where the lower level is a distributed network utility maximization (NUM) algorithm with concave surrogate utility functions, and the upper level is a data-driven learning algorithm that finds the best surrogate utility functions to maximize the sum of true network utility. The proposed algorithm learns from data samples (utility values or gradient values) to auto-tune the surrogate utility functions to maximize the true network utility, and thus works for unknown utility functions. For a general network, we establish the nonasymptotic convergence rate of the proposed algorithm with nonconcave utility functions. Simulations validate our theoretical results and demonstrate the effectiveness of the proposed method in a real-world network.

Optimizing Sectorized Wireless Networks: Model, Analysis, and Algorithm

Panagiotis Promponas (Yale University), Tingjun Chen (Duke University), Leandros Tassiulas (Yale University)

Abstract: Future wireless networks need to support the increasing demands for high data rates and improved coverage. One promising solution is sectorization, where an infrastructure node (e.g., a base station) is equipped with multiple sectors employing directional communication. Although the concept of sectorization is not new, it is critical to fully understand the potential of sectorized networks, such as the rate gain achieved when multiple sectors can be simultaneously activated. In this paper, we focus on sectorized wireless networks, where sectorized infrastructure nodes with beam-steering capabilities form a multi-hop mesh network for data forwarding and routing. We present a sectorized node model and characterize the capacity region of these sectorized networks. We define the flow extension ratio and the corresponding sectorization gain, which quantitatively measure the performance gain introduced by node sectorization as a function of the network flow. Our objective is to find the optimal sectorization of each node that achieves the maximum flow extension ratio, and thus the sectorization gain. Towards this goal, we formulate the corresponding optimization problem and develop an efficient distributed algorithm that obtains the node sectorization under a given network flow with an approximation ratio of 2/3. Through extensive simulations, we evaluate the sectorization gain and the performance of the proposed algorithm in various network scenarios with varying network flows. The simulation results show that the approximate sectorization gain increases sublinearly as a function of the number of sectors per node.

Overlay Routing Over an Uncooperative Underlay

Yudi Huang (Penn State University), Ting He (Penn State University)

Abstract: An overlay network is a non-intrusive mechanism to enhance the existing network infrastructure by building a logical distributed system on top of a physical underlay. A major difficulty in operating overlay networks is the lack of cooperation from the underlay, which is usually under a different network administration. In particular, the lack of knowledge about the underlay topology and link capacities makes the design of efficient overlay routing extremely difficult. In contrast to existing solutions for overlay routing based on simplistic assumptions such as a known underlay topology or disjoint routing paths through the underlay, we aim at systematically optimizing overlay routing without causing congestion, by extracting the necessary information about the underlay from measurements taken at overlay nodes. To this end, we (i) identify the minimum information needed for congestion-free overlay routing, and (ii) develop polynomial-complexity algorithms to infer this information with guaranteed accuracy. Our evaluations in NS3 based on a real network topology demonstrate a notable performance advantage of the proposed solution over existing solutions.

3:00 - 3:30 PM     Coffee Break
3:30 - 4:40 PM     Session 5: Online Learning and Optimization
Session Chair: Chris Brinton (Purdue University)
Distributional-Utility Actor-Critic for Network Slice Performance Guarantee

Jingdi Chen (George Washington University), Tian Lan (George Washington University), Nakjung Choi (Nokia Bell Labs)

Abstract: Optimizing distributional utilities (such as mitigating performance tails and maximizing risk-aware objectives) is crucial for online network slice management to meet the diverse requirements of different services and applications. While Reinforcement Learning (RL) has been successfully applied to autonomous online decision-making in many network slice management problems, existing solutions often focus on maximizing the expected cumulative reward or are limited to specific distributional utilities. This paper proposes a new RL algorithm for general Distributional Utilities Optimization (DUO) in an actor-critic framework for online network slice management. In particular, we derive a DUO Temporal Difference Learning algorithm for updating distributional utilities in the critic through stochastic gradient descent. It is proven that the Distributional Optimal Bellman Operator for distributional utilities is a $\gamma$-contraction and thus is guaranteed to converge. In addition, we parameterize the policy by another neural network and prove a revised policy gradient theorem for distributional utilities, which shows that the derived policy update converges to at least a stationary point of the DUO problem. Our proposed algorithm works with arbitrary smooth utility functions on the return distributions, making it suitable for optimizing various network slice performance objectives in an online setting. Our solution is implemented and validated in a hybrid trace-driven network simulator built using an open-source O-RAN dataset, along with data collected from a 5G O-RAN testbed. The results demonstrate a significant improvement over heuristic and RL baselines.
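For readers unfamiliar with the term, a "distributional utility" is a function computed on the full return distribution rather than on its mean. The Conditional Value-at-Risk below is one standard example of such a risk-aware objective; it is illustrative only and is not the paper's algorithm.

```python
def cvar(returns, alpha):
    """Conditional Value-at-Risk: the mean of the worst alpha-fraction
    of sampled returns -- one example of a risk-aware utility that
    depends on the return distribution, not just its expectation."""
    sorted_r = sorted(returns)               # ascending: worst returns first
    k = max(1, int(len(sorted_r) * alpha))   # size of the tail to average
    return sum(sorted_r[:k]) / k
```

For instance, `cvar([1, 2, 3, 4], 0.5)` averages the two worst returns and yields 1.5, whereas the plain mean is 2.5; maximizing CVaR thus penalizes performance tails directly.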

Differentially Private Distributed Online Convex Optimization Towards Low Regret and Communication Cost

Jiandong Liu (University of Science and Technology of China), Lan Zhang (University of Science and Technology of China), Xiaojing Yu (University of Science and Technology of China), Xiang-Yang Li (University of Science and Technology of China)

Abstract: Distributed online convex optimization (DOCO) has emerged as a promising approach in scenarios where multiple learners collaboratively serve sequential (possibly untrusted) clients using AI models. However, ensuring clients' privacy and minimizing regret while keeping the communication cost reasonable poses a significant challenge. To address this issue, we propose private DOCO algorithms, termed PDOM, for both oblivious and stochastic settings. Our approach involves a mini-batch strategy that optimally balances the effects of slower model updates and differential privacy (DP) perturbation. Our theoretical analysis shows that, compared to state-of-the-art algorithms, PDOM reduces the regret bounds and communication cost by an $\mathcal{O}(d/\epsilon)$ factor for the oblivious setting. For the stochastic setting, the impact of DP perturbation becomes negligible when the learning time $T=\Omega(d^4/(n\epsilon^4))$, provided that the loss functions are Lipschitz, convex, and smooth, where $d$ is the model dimension, $n$ is the number of learners, and $\epsilon$ is the privacy budget. Our evaluations validate these results, demonstrating that PDOM can reduce the classification error rates of state-of-the-art methods by up to $20\%$ in distributed online logistic regression tasks while achieving communication savings of above $90\%$.
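For context, a minimal sketch of the generic clip-and-perturb pattern that DP mini-batch methods build on. The function name and the noise calibration follow the textbook Gaussian mechanism, not PDOM itself, whose mini-batch balancing is more involved.

```python
import math
import random

def dp_minibatch_gradient(grads, clip_norm, epsilon, delta):
    """Average a mini-batch of gradient vectors after clipping each
    to L2 norm `clip_norm`, then add Gaussian noise calibrated to
    (epsilon, delta)-differential privacy via the Gaussian mechanism."""
    def clip(g):
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        return [x * scale for x in g]

    clipped = [clip(g) for g in grads]
    d = len(grads[0])
    avg = [sum(g[i] for g in clipped) / len(grads) for i in range(d)]
    # Per-sample sensitivity of the average is clip_norm / batch_size.
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / (epsilon * len(grads))
    return [a + random.gauss(0.0, sigma) for a in avg]
```

Clipping bounds each sample's influence (the sensitivity), which is what lets the added noise shrink as the batch grows — the same trade-off the mini-batch strategy above navigates.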

Learning to Schedule in Non-Stationary Wireless Networks With Unknown Statistics

Quang Minh Nguyen (Massachusetts Institute of Technology), Eytan Modiano (Massachusetts Institute of Technology)

Abstract: The emergence of large-scale wireless networks with partially observable and time-varying dynamics has imposed new challenges on the design of optimal control policies. This paper studies efficient scheduling algorithms for wireless networks subject to generalized interference constraints, where mean arrival and mean service rates are unknown and non-stationary. This model captures the characteristics of realistic edge devices communicating wirelessly in modern networks. We propose a novel algorithm termed MW-UCB for generalized wireless network scheduling, which is based on the Max-Weight policy and leverages the Sliding-Window Upper-Confidence Bound to learn the channels' statistics under non-stationarity. MW-UCB is provably throughput-optimal under mild assumptions on the variability of mean service rates. Specifically, as long as the total variation in mean service rates over any time period grows sub-linearly in time, we show that MW-UCB can achieve a stability region arbitrarily close to the stability region of the class of policies with full knowledge of the channel statistics. Extensive simulations validate our theoretical results and demonstrate the favorable performance of MW-UCB.
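The two ingredients named in the abstract can be sketched generically as follows. This is a toy illustration under assumed data structures (a per-channel sample history and a fixed window length), not the authors' MW-UCB implementation.

```python
import math

def sw_ucb_estimate(history, window, t, confidence=2.0):
    """Sliding-window UCB estimate of a channel's mean service rate.

    history: list of (timestamp, observed_rate) samples for one channel.
    Only samples from the last `window` slots are used, so the estimate
    can track a non-stationary mean; the bonus term encourages
    exploration of rarely sampled channels."""
    recent = [r for (ts, r) in history if ts > t - window]
    if not recent:
        return float("inf")  # unexplored channels get scheduling priority
    mean = sum(recent) / len(recent)
    bonus = math.sqrt(confidence * math.log(max(2, min(t, window))) / len(recent))
    return mean + bonus

def max_weight_schedule(queues, rate_estimates):
    """Max-Weight policy: activate the link maximizing
    queue_length * estimated_service_rate."""
    return max(range(len(queues)), key=lambda i: queues[i] * rate_estimates[i])
```

Replacing the true (unknown) rates in Max-Weight with sliding-window UCB estimates is the basic combination the abstract describes; the paper's contribution is proving this remains throughput-optimal when total variation grows sub-linearly.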

6:00 PM     Banquet (City Cruises)

Thursday, Oct 26, 2023

08:00 - 09:00 AM     Registration
09:00 - 10:00 AM     Special Session
Session Chair: Eytan Modiano (MIT)

Please join us as we celebrate the invaluable contributions by Prof. Anthony Ephremides to wireless communications and networking with a special session about Age of Information -- an active and growing area of research on new metrics and tools to quantify information freshness. These new metrics helped establish that low-latency networking is insufficient to guarantee information freshness, pointing to the need to integrate user and application requirements, as well as the meaning of information, into the design of information systems.

10:00 - 10:30 AM     Coffee Break
10:30 AM - 12:00 PM     Session 6: Learning on the Edge
Session Chair: Xiaowen Gong (Auburn University)
PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities

Zhuqing Liu (The Ohio State University), Xin Zhang (Iowa State University), Songtao Lu (IBM Thomas J. Watson Research Center), Jia Liu (The Ohio State University)

Abstract: Recently, min-max optimization problems have received increasing attention due to their wide range of applications in machine learning (ML). However, most existing min-max solution techniques are either single-machine or distributed algorithms coordinated by a central server. In this paper, we focus on decentralized min-max optimization for learning with domain constraints, where multiple agents collectively solve a nonconvex-strongly-concave min-max saddle point problem without coordination from any server. Decentralized min-max optimization problems with domain constraints underpin many important ML applications, including multi-agent ML fairness assurance and policy evaluation in multi-agent reinforcement learning. We propose an algorithm called PRECISION (proximal gradient-tracking and stochastic recursive variance reduction) that enjoys a convergence rate of $\mathcal{O}(1/T)$, where $T$ is the maximum number of iterations. To further reduce sample complexity, we propose PRECISION$^+$ with an adaptive batch size technique. We show that the fast $\mathcal{O}(1/T)$ convergence of PRECISION and PRECISION$^+$ to an $\epsilon$-stationary point implies $\mathcal{O}(\epsilon^{-2})$ communication complexity and $\mathcal{O}(m\sqrt{n}\epsilon^{-2})$ sample complexity, where $m$ is the number of agents and $n$ is the size of the dataset at each agent. To our knowledge, this is the first work that achieves $\mathcal{O}(\epsilon^{-2})$ in both sample and communication complexities in decentralized min-max learning with domain constraints. Our experiments also corroborate the theoretical results.
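To make "min-max with domain constraints" concrete, here is the basic projected gradient descent-ascent step that proximal methods for such problems refine. This is a generic sketch assuming a box constraint on the min variable; PRECISION layers gradient tracking and variance reduction on top of steps like this.

```python
def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^d -- the proximal
    operator of the box constraint's indicator function."""
    return [min(max(v, lo), hi) for v in x]

def projected_gda_step(x, y, grad_x, grad_y, lr, lo, hi):
    """One projected gradient descent-ascent step for
    min_x max_y f(x, y), with x constrained to the box [lo, hi]^d:
    descend in x (then project back into the feasible set), ascend in y."""
    x_new = project_box([xi - lr * gi for xi, gi in zip(x, grad_x)], lo, hi)
    y_new = [yi + lr * gi for yi, gi in zip(y, grad_y)]
    return x_new, y_new
```

The projection after each descent step is what keeps the iterates feasible under the domain constraint; in the decentralized setting each agent applies such a step locally and mixes with its neighbors.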

Communication Resources Limited Decentralized Learning with Privacy Guarantee through Over-the-Air Computation

Jing Qiao (Shandong University), Shikun Shen (Shandong University), Shuzhen Chen (Shandong University), Xiao Zhang (Shandong University), Tian Lan (George Washington University), Xiuzhen Cheng (Shandong University), Dongxiao Yu (Shandong University)

Abstract: In this paper, we propose a novel decentralized learning algorithm, namely \emph{DLLR-OA}, for resource-constrained over-the-air computation with a formal privacy guarantee. Theoretically, we characterize how the model-component selection error induced by limited resources and the compound communication errors jointly impact decentralized learning, causing the iterates of \emph{DLLR-OA} to converge to a contraction region centered around a scaled version of the errors. In particular, the convergence rate of the \emph{DLLR-OA} algorithm in the error-free case, $\mathcal{O}(\frac{1}{\sqrt{nT}})$, matches the state of the art. Besides, we formulate a power control problem and decouple it into transmitter and receiver sub-problems to accelerate the convergence of the \emph{DLLR-OA} algorithm. Furthermore, we provide a quantitative privacy guarantee for the proposed over-the-air computation approach. Interestingly, we show that network noise can indeed enhance the privacy of aggregated updates, while over-the-air computation can further protect individual updates. Finally, extensive experiments demonstrate that \emph{DLLR-OA} performs well in the communication-resource-constrained setting. In particular, numerical results on the CIFAR-10 dataset show a nearly 30\% communication cost reduction over state-of-the-art baselines with comparable learning accuracy, even in resource-constrained settings.

Waste Not, Want Not: Service Migration-Assisted Federated Intelligence for Multi-Modality Mobile Edge Computing

Hansong Zhou (Florida State University), Shaoying Wang (Florida State University), Chutian Jiang (Florida State University), Xiaonan Zhang (Florida State University), Linke Guo (Clemson University), Yukun Yuan (University of Tennessee at Chattanooga)

Abstract: Future mobile edge computing (MEC) is envisioned to provide federated intelligence to delay-sensitive learning tasks with multi-modal data. Conventional horizontal federated learning (FL) suffers from high resource demand in response to complicated multi-modal models. Multi-modal FL (MFL), on the other hand, offers a more efficient approach for learning from multi-modal data. In MFL, the entire multi-modal model is split into several sub-models, each tailored to a specific data modality and trained on a designated edge. As sub-models are considerably smaller than the multi-modal model, MFL requires fewer computation resources and reduces communication time. Nevertheless, deploying MFL over MEC faces the challenges of device mobility and edge heterogeneity, which, if not addressed, could negatively impact MFL performance. In this paper, we investigate a Service Migration-assisted Mobile Multi-modal Federated Learning (SM3FL) framework, where service migration of sub-models between edges is enabled. To effectively utilize both communication and computation resources without waste in SM3FL, we develop optimal strategies for service migration and data sample collection to minimize the wall-clock time, defined as the training time required to reach the learning target. Our experiment results show that the proposed SM3FL framework demonstrates remarkable performance, surpassing other state-of-the-art FL frameworks by substantially reducing the computing demand by 17.5% and the wall-clock time by 25.3%.

Scalable Multi-Modal Learning for Cross-Link Channel Prediction in Massive IoT Networks

Kun Woo Cho (Princeton University), Marco Cominelli (University of Brescia/CNIT), Francesco Gringoli (University of Brescia/CNIT), Joerg Widmer (IMDEA Networks), Kyle Jamieson (Princeton University)

Abstract: Tomorrow's massive-scale IoT sensor networks are poised to drive uplink traffic demand, especially in areas of dense deployment. To meet this demand, however, network designers leverage tools that often require accurate estimates of Channel State Information (CSI), which incurs a high overhead and thus reduces network throughput. Furthermore, the overhead generally scales with the number of clients, and so is of special concern in such massive IoT sensor networks. While prior work has used transmissions over one frequency band to predict the channel of another frequency band on the same link, this paper takes the next step in the effort to reduce CSI overhead: predicting the CSI of a nearby but distinct link. We propose Cross-Link Channel Prediction (CLCP), a technique that leverages multi-view representation learning to predict the channel response of a large number of users, thereby reducing channel estimation overhead further than previously possible. CLCP’s design is highly practical, exploiting channel estimates obtained from existing transmissions instead of dedicated channel sounding or extra pilot signals. We have implemented CLCP for two different Wi-Fi versions, namely 802.11n and 802.11ax, the latter being the leading candidate for future IoT networks. We evaluate CLCP in two large-scale indoor scenarios involving both line-of-sight and non-line-of-sight transmissions with up to 144 different 802.11ax users and four different channel bandwidths, from 20 MHz up to 160 MHz. Our results show that CLCP provides a 2× throughput gain over the baseline and a 30% throughput gain over existing prediction algorithms.

12:00 - 1:30 PM     Lunch
1:30 - 3:00 PM     Session 7: Security and Sensing
Session Chair: Linke Guo (Clemson University)
Wave-for-Safe: Multisensor-based Mutual Authentication for Unmanned Delivery Vehicle Services

Huanqi Yang (City University of Hong Kong), Mingda Han (City University of Hong Kong), Shuyao Shi (The Chinese University of Hong Kong), Zhenyu Yan (The Chinese University of Hong Kong), Guoliang Xing (The Chinese University of Hong Kong), Jianping Wang (City University of Hong Kong), Weitao Xu (City University of Hong Kong)

Abstract: In recent years, the deployment of unmanned vehicle delivery services has increased unprecedentedly, leading to a need for enhanced security due to the risk of releasing high-value packages to an unauthorized third party during pickup or delivery. Existing authentication methods such as QR codes and one-time passwords are inadequate, as they are susceptible to attacks and provide only one-way authentication. This paper, for the first time to the best of our knowledge, proposes Wave-for-Safe (W4S), a novel mutual authentication system that utilizes multi-modal sensors on both the user's smartphone and the unmanned vehicle. W4S uses random hand-waving by the legitimate user to achieve robust authentication by obtaining highly correlated sensory data measured by the Inertial Measurement Unit (IMU) in the smartphone and sensors in the unmanned vehicle (e.g., mmWave radar and camera). We propose several novel methods to overcome challenges such as heterogeneous data processing, asynchronization, and imitation attacks. The prototype is implemented on an unmanned vehicle and various smartphones, and evaluation in different real-world scenarios shows that W4S achieves an equal error rate below 0.013 against various attacks.

EarCase: Sound Source Localization Leveraging Mini Acoustic Structure Equipped Phone Cases for Hearing-challenged People

Xin Li (Rutgers University), Yilin Yang (Rutgers University), Zhengkun Ye (Temple University), Yan Wang (Temple University), Yingying Chen (Rutgers University)

Abstract: Sound source localization is vital for daily tasks such as communication or navigating environments. However, millions of adults struggle with hearing impairment, which limits their ability to identify the direction and distance of sound sources. Traditional methods for sound spatial sensing, such as microphone arrays, are not suitable for resource-constrained IoT devices like smartphones due to power consumption or hardware complexity. To overcome these limitations, this paper proposes EarCase, an alternative scheme that utilizes commercial smartphones with only two microphones to recognize 3D acoustic spatial information. EarCase draws inspiration from the human auditory system, where the two ears amplify minute differences in acoustic signals to help pinpoint sound sources. This ability can be regarded as a response function trained through a large amount of sound source information, which can be used to extract spectral cues from a sound source position to the eardrums. We imitate this effect by designing a smartphone case with perforated mini-structures covering the microphones to help the smartphone infer the location of the sound source. Sound waves that pass through the mini-structure undergo unique changes in diffraction at the hole, amplifying directional information similarly to the ears. Our scheme uses the top and bottom microphones to eliminate noise and multi-path effects, making the design robust to different sound sources in varying environments. By using only built-in microphones and low-cost phone cases, EarCase provides an accessible tool to enhance the quality of life for hearing-impaired individuals. Extensive experimental results show that EarCase achieves high accuracy in localizing sounds, with a mean error of $3.7^{\circ}$ at a distance of 200cm and ~96\% accuracy for real-world sounds (e.g., car horns).

Optimizing Utility-Energy Efficiency for the Metaverse over Wireless Networks under Physical Layer Security

Jun Zhao (Nanyang Technological University), Xinyu Zhou (Nanyang Technological University), Yang Li (Nanyang Technological University), Liangxin Qian (Nanyang Technological University)

Abstract: The Metaverse, an emerging digital space, is expected to offer various services mirroring the real world. Wireless communications for mobile Metaverse users should be tailored to meet the following user characteristics: 1) emphasizing application-specific perceptual utility instead of simply the transmission rate, 2) concerned with energy efficiency due to the limited device battery and energy intensiveness of some applications, and 3) caring about security as the applications may involve sensitive personal data. To this end, this paper incorporates application-specific utility, energy efficiency, and physical-layer security (PLS) into the studied optimization in a wireless network for the Metaverse. Specifically, after introducing utility-energy efficiency (UEE) to represent each Metaverse user’s application-specific objective under PLS, we formulate an optimization to maximize the network’s weighted sum-UEE by deciding users’ transmission powers and communication bandwidths. The formulated problem belongs to the sum-of-ratios optimization, for which prior studies have demonstrated its difficulty. Nevertheless, our proposed algorithm 1) obtains the global optimum for the weighted sum-UEE optimization, via a transform to parametric convex optimization problems, 2) applies to any utility function which is concave, increasing, and twice differentiable, and 3) achieves a linear time complexity in the number of users (the optimal complexity in the order sense). Simulations confirm the superiority of our algorithm over other approaches. We explain that our technique for solving the sum-of-ratios optimization is applicable to other optimization problems in wireless networks and mobile computing.

Swirls: Sniffing Wi-Fi Using Radios with Low Sampling Rates

Zhihui Gao (Duke University), Yiran Chen (Duke University), Tingjun Chen (Duke University)

Abstract: Next-generation Wi-Fi systems embrace large signal bandwidths to achieve significantly improved data rates, while requiring efficient methods for network monitoring and spectrum sharing applications. A radio receiver (RX) operating at low sampling rates can largely improve the energy and cost efficiency of such systems if it can extract useful network information such as the duration and structure of wireless packets. In this paper, we present the design of Swirls, a novel framework for sniffing Wi-Fi physical-layer information using RXs operating at sampling rates that are (much) smaller than the signal bandwidth. Swirls consists of three modules tailored for low-sampling-rate RXs: joint packet detection, optimized RX frequency selection, and a packet property decoder. We implement Swirls using three software-defined radio platforms and extensively evaluate Swirls in real-world scenarios. The experiments show that for 20/40 MHz 802.11n packets, Swirls with a 5 MHz sampling rate can achieve a mean absolute error (MAE) of transmission time and PHY service data unit (PSDU) length decoding of 0.06 ms and 1.91 kB, respectively, at only 10 dB signal-to-noise ratio. With the same setting, Swirls simultaneously achieves classification accuracies for the modulation and coding scheme, number of spatial streams, and bandwidth of 95.3%, 96.1%, and 95.6%, respectively. In an extreme case for 160 MHz 802.11ac/ax packets, Swirls with a 2.5 MHz sampling rate (i.e., a downsampling ratio of 64) can still achieve an MAE of transmission time decoding of 0.47/0.67 ms.

3:00 - 3:30 PM     Coffee Break
3:30 - 4:40 PM     Session 8: Internet of Things
Session Chair: Xiaonan Zhang (Florida State University)
SRLoRa: Neural-enhanced LoRa Weak Signal Decoding with Multi-gateway Super Resolution

Jialuo Du (Tsinghua University), Yidong Ren (Michigan State University), Zhui Zhu (Tsinghua University), Chenning Li (Michigan State University), Zhichao Cao (Michigan State University), Qiang Ma (Tsinghua University), Yunhao Liu (Tsinghua University)

Abstract: LoRa and its enabled LoRa wide-area network (LoRaWAN) have been seen as an important part of the next-generation network for massive Internet-of-Things (IoT). Due to LoRa's low-power and long-range nature, LoRa signals are often much weaker than the noise floor, particularly in complex urban or semi-indoor environments. Therefore, weak signal decoding is critical to achieving the desired wide-area coverage in general. Existing work has shown the advantages of exploring deep neural networks (DNN) for weak signal decoding. However, existing single-gateway DNN decoders cannot fully leverage the spatial information available in multi-gateway scenarios. In this paper, we propose SRLoRa, an efficient DNN LoRa decoder that fully utilizes the spatial information from multiple gateways to decode extremely weak LoRa signals. Specifically, we design interleaved denoising and merging layers to improve signal quality at ultra-low SNR. We develop efficient merging on feature maps extracted by denoising DNNs to tolerate time misalignments among different signals. We define max and min operations in the merging layer to efficiently extract salient features and reduce noise, merging the features extracted from multiple gateways to guide subsequent DNN layers to gradually improve signal quality. We implement SRLoRa with USRP N210 and commercial LoRa nodes and evaluate its performance indoors and outdoors. The results show that with four gateways, SRLoRa achieves an SNR gain of 4.53-4.82 dB, which is 2.51$\times$ that of Charm, leading to a 1.84$\times$ coverage area compared to standard LoRa in an urban deployment.
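The elementwise max/min merging idea described in the abstract can be illustrated in miniature as below. This toy function operates on flat feature vectors; SRLoRa's actual merging layer works on DNN feature maps and is far richer.

```python
def merge_feature_maps(maps):
    """Merge per-gateway feature vectors with elementwise max and min,
    concatenating the two results: max picks out salient features seen
    at any gateway, min suppresses components dominated by noise."""
    d = len(maps[0])
    mx = [max(m[i] for m in maps) for i in range(d)]
    mn = [min(m[i] for m in maps) for i in range(d)]
    return mx + mn
```

Because max and min are symmetric in their inputs, the merge is also invariant to the order in which gateways report, which is convenient when the number of participating gateways varies.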

mReader: Concurrent UHF RFID Tag Reading

Hossein Pirayesh (Michigan State University), Shichen Zhang (Michigan State University), Huacheng Zeng (Michigan State University)

Abstract: UHF RFID tags have been widely used for contactless inventory and tracking applications. One fundamental problem with RFID readers is their limited tag reading rate. Existing RFID readers (e.g., Impinj Speedway) can read about 35 tags per second in a read zone, which is far from enough for many applications. In this paper, we present the first-of-its-kind RFID reader (mReader), which borrows the idea of multi-user MIMO (MU-MIMO) from cellular networks to enable concurrent multi-tag reading in passive RFID systems. mReader is equipped with multiple antennas for implicit beamforming in downlink transmissions. It is enabled by three key techniques: uplink collision recovery, transition-based channel estimation, and zero-overhead channel calibration. In addition, mReader employs a Q-value adaptation algorithm for medium access control to maximize its tag reading rate. We have built a prototype of mReader on USRP X310 and demonstrated for the first time that a two-antenna reader can read two commercial off-the-shelf (COTS) tags simultaneously. Numerical results further show that mReader can improve the tag reading rate by 45% compared to existing readers.

LeakageScatter: Backscattering LiFi-leaked RF Signals

Muhammad Sarmad Mir (Universidad Carlos III de Madrid), Minhao Cui (University of Massachusetts Amherst), Borja Genoves Guzman (IMDEA Networks Institute), Qing Wang (Delft University of Technology), Jie Xiong (University of Massachusetts Amherst), Domenico Giustiniano (IMDEA Networks Institute)

Abstract: Radio-Frequency (RF) backscatter has emerged as a low-power communication technique. Backscatter systems either rely on active signal generators (spectrum efficient, but requiring dedicated infrastructure) or on existing ambient wireless transmissions (using existing infrastructure, but spectrum inefficient). In this paper, we aim to make RF backscatter spectrum efficient while working with existing infrastructure. We propose to leverage the deployment of LiFi networks built upon LED bulbs for pervasive RF backscatter. We experimentally demonstrate that LiFi, which passively leaks RF signals, can be exploited as a radio carrier generator for low-power RF backscatter. We further design LeakageScatter, the first backscatter system operating in the ISM band and exploiting LiFi-leaked RF signals, without the need to actively generate the carrier wave. We customize the design of the loop at the LiFi transmitter, as well as the coil antennas at the tag and the RF backscatter receiver, to optimize system performance. We propose to opportunistically enable the oscillator of the backscatter tag in software, which can reduce the energy consumption of backscattering by up to 75\%. Experimental results show that LeakageScatter achieves a backscattering distance of up to 10 m and 18 m in indoor and outdoor scenarios, respectively, without using a dedicated RF carrier generator.