Monday, Oct 27, 2025
8:00 - 17:00
Registration
(Anderson-Clarke Center)
8:30 - 9:00
Opening Remarks
Room: Hudspeth Auditorium
Abstract: Next generation wireless communication systems are increasingly capable of providing accurate sensing in addition to connectivity. By reusing spectrum and infrastructure, these systems can provide capabilities traditionally associated with radar, including precise localization and motion tracking. This keynote discusses the evolution of wireless sensing from simple device localization to advanced imaging in distributed multi-static wireless systems. The talk will cover how carrier-phase information enables fine-grained positioning, how time and frequency offsets can be handled for accurate sensing among unsynchronized devices, how sparse frequency bands can be combined to increase resolution, and how receivers can cooperate to form synthetic apertures for coherent imaging of dynamic scenes. We will emphasize experimental testbeds and real-world demonstrations, highlighting both practical challenges and opportunities in turning communication networks into pervasive sensing infrastructures.
Bio: Joerg Widmer is Research Professor and Research Director of IMDEA Networks in Madrid, Spain, where he leads the Wireless Networking Research Group. Previously, he worked at DOCOMO Euro-Labs in Munich, Germany and EPFL, Switzerland, and has held visiting researcher positions at ICSI Berkeley (USA), University College London (UK), and TU Darmstadt (Germany). His research focuses on wireless networks, ranging from extremely high frequency millimeter-wave communication and wireless sensing to mobile network architectures. Joerg Widmer authored over 250 conference and journal papers, and holds 14 patents. He was awarded an ERC consolidator grant, the Friedrich Wilhelm Bessel Research Award of the Alexander von Humboldt Foundation, a Mercator Fellowship of the German Research Foundation, a Spanish Ramon y Cajal grant, as well as 16 best paper awards. He is an IEEE Fellow and Distinguished Member of the ACM.
10:00 - 10:30
Coffee Break
(Commons)
10:30 - 12:00
Session 1: Next-Generation Radio
Session Chair: Xiaowen Gong
Room: Hudspeth Auditorium
Paper
Yifan Yan, Shuai Yang, Xiuzhen Guo, Xiangguang Wang, Wei Chow, Yuanchao Shu, Shibo He (Zhejiang University)
Abstract: Millimeter-wave (mmWave) sensing technology holds significant value in human-centric applications, yet the high costs associated with data acquisition and annotation limit its widespread adoption in our daily lives. Concurrently, the rapid evolution of large language models (LLMs) has opened up opportunities for addressing complex human needs. This paper presents mmExpert, an innovative mmWave understanding framework consisting of a data generation flywheel that leverages LLMs to automate the generation of synthetic mmWave radar datasets for specific application scenarios, thereby training models capable of zero-shot generalization in real-world environments. Extensive experiments demonstrate that the data synthesized by mmExpert significantly enhances the performance of downstream models and facilitates the successful deployment of large models for mmWave understanding.
Paper
K M Rumman (Northeastern University); Francesca Meneghello (Northeastern University); Khandaker Foysal Haque (Northeastern University); Francesco Gringoli (University of Brescia); Francesco Restuccia (Northeastern University)
Abstract: The performance of multiple-input, multiple-output (MIMO) systems highly depends on the precision of channel estimates provided by the mobile users. However, the current Wi-Fi standard requires an update interval of 10 ms, irrespective of the channel dynamics. This imposes a substantial overhead for the MIMO channel estimation. Recent work mainly targets different compression strategies, potentially compromising precoding accuracy and, in turn, the network performance. In stark opposition, we propose SHRINK, a framework to dynamically adapt the feedback transmission rate to the propagation environments and performance requirements. SHRINK determines whether the users should send back their channel estimates by predicting network performance through a data-driven analysis of prior and current channel estimates. We have experimentally evaluated SHRINK using off-the-shelf Wi-Fi devices in multiple environments, including an anechoic chamber, and benchmarked its performance against several state-of-the-art approaches. Experimental results show that SHRINK reduces airtime and data overhead by 81% on average compared to the IEEE 802.11 standard without impacting the precoding performance. Moreover, SHRINK outperforms state-of-the-art approaches by an average gain of 33.6% in airtime and data overhead reduction, corresponding to an increase in throughput of 24.5%.
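The core idea of gating channel feedback on observed channel dynamics can be illustrated with a minimal sketch. The correlation-based metric, threshold, and array dimensions below are placeholder assumptions for illustration, not SHRINK's actual data-driven performance predictor.

```python
import numpy as np

def should_send_feedback(prev_csi, curr_csi, threshold=0.05):
    """Decide whether the station should report a fresh channel estimate.

    Hypothetical rule: skip the report when the new estimate is close to the
    previous one, measured by normalized correlation loss (a placeholder
    metric, not the data-driven predictor used by SHRINK).
    """
    prev, curr = prev_csi.ravel(), curr_csi.ravel()
    corr = np.abs(np.vdot(prev, curr)) / (np.linalg.norm(prev) * np.linalg.norm(curr) + 1e-12)
    return (1.0 - corr) > threshold

# Example: a nearly static channel does not trigger feedback.
rng = np.random.default_rng(0)
h_old = rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))
h_new = h_old + 0.01 * (rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64)))
print(should_send_feedback(h_old, h_new))   # False: feedback can be skipped
```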
Paper
Phuc Dinh, Yufei Feng, Eduardo Baena (Northeastern University); Yunmeng Han (University of Cincinnati); Weiming Qi (Michigan State University); Zihan Xu (Virginia Tech); Moinak Ghoshal, Pau Closas, Dimitrios Koutsonikolas (Northeastern University); Joerg Widmer (IMDEA Networks)
Abstract: Accurate localization in dense urban areas remains a significant challenge due to the limitations of Global Navigation Satellite Systems (GNSS) in environments with obstacles and reflections, such as urban canyons. While the most recent 3GPP standards offer sophisticated network-centric positioning techniques, their widespread deployment will take time and is hindered by high infrastructure costs and complexity. In this work, we present mm-NOLOC, a UE-centric localization system, designed as a practical fallback when GNSS fails to deliver high accuracy, that leverages the growing deployment of 5G mmWave infrastructure in dense urban areas. Unlike traditional approaches, mm-NOLOC operates independently of 3GPP location support and utilizes only standardized control-plane information collected solely on the UE side – Synchronization Signal Block (SSB) Indices that are mapped to 5G mmWave beam directions – to obtain robust position estimations. To address the uncertainty introduced by urban multipath, mm-NOLOC models the SSB-to-angle relationship as a discrete and multimodal distribution, based on empirical measurements in operational 5G mmWave networks, and uses a particle filter to refine position estimates by integrating probabilistic observations with UE-side motion dynamics. We validate mm-NOLOC through experiments over commercial 5G mmWave deployments, as well as trace-based simulations. Our results show that mm-NOLOC achieves a median localization error below 3 m and a 95th percentile error below 10 m, offering a practical fallback localization solution in urban canyon scenarios for 5G networks without network location support.
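A minimal sketch of the particle-filter fusion step described above, assuming a hypothetical discrete SSB-to-angle likelihood table and made-up motion/noise parameters; it is not the mm-NOLOC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical empirical SSB-to-angle likelihood: for each observed SSB index,
# a discrete multimodal distribution over candidate bearings (radians).
SSB_ANGLE_MODES = {
    3: {0.2: 0.6, 1.4: 0.3, 2.8: 0.1},   # placeholder values, not measured data
    7: {1.0: 0.7, 2.1: 0.3},
}

N = 1000
particles = rng.uniform(-50, 50, size=(N, 2))   # candidate UE positions (m)
GNB_POS = np.array([0.0, 0.0])                  # assumed serving gNB location

def pf_step(ssb_index, velocity, dt=0.5):
    """One predict-update-resample cycle; returns the position estimate."""
    global particles
    # Predict: propagate particles with UE motion plus process noise.
    particles = particles + velocity * dt + rng.normal(0.0, 0.5, size=particles.shape)
    # Update: weight particles by how well their bearing from the gNB matches
    # one of the angular modes associated with the observed SSB index.
    bearings = np.arctan2(particles[:, 1] - GNB_POS[1], particles[:, 0] - GNB_POS[0])
    weights = np.zeros(N)
    for angle, prob in SSB_ANGLE_MODES.get(ssb_index, {}).items():
        weights += prob * np.exp(-0.5 * ((bearings - angle) / 0.15) ** 2)
    weights += 1e-12
    weights /= weights.sum()
    # Resample proportionally to the weights.
    particles = particles[rng.choice(N, size=N, p=weights)]
    return particles.mean(axis=0)

print(pf_step(ssb_index=3, velocity=np.array([1.0, 0.0])))
```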
Paper
Francesco Pessia, Sazzad Sayyed, Francesco Restuccia (Northeastern University)
Abstract: Enabling spectrum perception through deep neural networks (DNNs) directly connected to the radio front-end is of fundamental importance to realize next-generation spectrum-aware wireless systems. As such, low-latency DNN inference in reconfigurable hardware such as Field Programmable Gate Arrays (FPGAs) is a necessary precursor to enable spectrum perception in real-world wireless systems. The key issue with existing work is that it considers DNNs that have fixed weights and architecture. On the other hand, it has been shown that dynamically changing the structure and weights of the DNN at runtime can lead to improved efficiency and adaptability. This work fills the current research gap by proposing FlexRFML, the first framework to integrate dynamic DNNs in the RF-front spectrum perception loop. FlexRFML includes High-Level Synthesis (HLS)-based design as well as customized circuits to achieve dynamic hardware reconfiguration and accelerate the DNN. We have prototyped FlexRFML on a Xilinx system-on-chip (SoC) ZCU102 by considering both modulation recognition and radio fingerprinting classification problems where the DNNs are dynamically adapted based on a preliminary classification of the input. Experimental results show that FlexRFML can decrease the inference latency by up to 35.6% with respect to static DNN inference with negligible additional hardware overhead. We pledge to release the FlexRFML hardware and software code.
12:00 - 13:30
Lunch
(Commons)
13:30 - 15:00
Session 2: LoRa & Interference Management
Session Chair: Marcos Vasconcelos
Room: Hudspeth Auditorium
Paper
Qiling Xu (City University of Hong Kong); Binbin Xie (University of Texas at Arlington); Jie Xiong (Nanyang Technological University); Lu Wang (Shenzhen University); Zhimeng Yin (City University of Hong Kong)
Abstract: LoRa technology holds great promise for wide-area wireless sensing, owing to its long-range connectivity and strong penetration capability. However, existing LoRa sensing systems face a fundamental limitation, i.e., they assume that the gateway only receives signals from the sensing node, without any interference from other communication nodes. This is because transmissions from communication nodes inevitably distort the sensing pattern extracted from the received sensing signal, leading to sensing failure. To address this challenging issue, we propose InterSen, a novel solution specifically designed to address communication interference on LoRa sensing. We conduct an in-depth analysis of how communication interference disrupts LoRa sensing and design innovative signal processing methods to recover the sensing pattern corrupted by interference. We evaluate the performance of InterSen across three real-world sensing applications, i.e., respiration monitoring, walking sensing, and gesture recognition. Comprehensive experiments demonstrate that InterSen achieves accurate sensing even in the presence of multiple communication nodes. This brings LoRa sensing one step closer to practical deployment in real-world scenarios.
Paper
Md Ashikul Haque, Abusayeed Saifullah (The University of Texas at Dallas, US)
Abstract: This paper addresses the vulnerability of LoRa communications against attackers transmitting LoRa packets and proposes an effective anti-jamming technique. Mitigating jamming in a LoRa network is extremely challenging as the devices have low computation power and limited energy. The state-of-the-art work addresses this type of jamming by exploiting the Received Signal Strength Indicator (RSSI). It is effective only against a single jammer and is ineffective against multiple jammers transmitting LoRa packets with coordinated timing. The variability in arrival times of jamming LoRa packets results in differing RSSI, rendering the technique impractical against multiple jammers. In this paper, we propose a new technique to handle jamming when multiple attackers transmit LoRa packets simultaneously. Our idea is to implicitly synchronize all the LoRa symbols from different packets to ensure the jammers' energy in FFT bins remains distinguishable. This method is link layer-agnostic, entails no overhead at the LoRa nodes, and enables packet decoding even when facing attacks from a single jammer or multiple jammers on a channel, effectively combating reactive jamming. We have implemented our anti-jamming system at the LoRa gateway and conducted experiments under various jamming scenarios on LoRa nodes. The results show that our anti-jamming technique improves packet reception rate and per-packet energy consumption by up to 106.56 and 135.15 times, respectively, under collaborative jamming.
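The FFT-bin view of LoRa symbols that the synchronization idea builds on can be sketched as follows; the chirp parameters and ideal-channel setup are assumptions for illustration and omit the paper's gateway-side synchronization of interfering packets.

```python
import numpy as np

def dechirp_peak_bin(rx, sf=7, bw=125e3, fs=125e3):
    """Dechirp one LoRa symbol and return the FFT bin holding the peak energy."""
    n = 2 ** sf
    t = np.arange(n) / fs
    k = bw / (n / fs)                                    # chirp sweep rate (Hz/s)
    downchirp = np.exp(-1j * np.pi * k * t ** 2)
    return int(np.argmax(np.abs(np.fft.fft(rx * downchirp))))

# Build an ideal upchirp carrying symbol value 42 and recover it: the symbol
# maps to a single FFT bin, which is the quantity jammers' energy must not
# obscure for decoding to succeed.
sf, bw, fs = 7, 125e3, 125e3
n = 2 ** sf
t = np.arange(n) / fs
k = bw / (n / fs)
symbol = 42
tx = np.exp(1j * np.pi * k * t ** 2) * np.exp(2j * np.pi * symbol * (bw / n) * t)
print(dechirp_peak_bin(tx, sf, bw, fs))                  # prints 42
```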
Paper
Runting Zhang (Shanghai Jiao Tong University); Yijie Li (National University of Singapore); Dian Ding, Hao Pan (Shanghai Jiao Tong University); Yongzhao Zhang (UESTC); Xiaoyu Ji (Zhejiang University); Jiadi Yu, Guangtao Xue, Yi-Chao Chen (Shanghai Jiao Tong University)
Abstract: Bluetooth Low Energy (BLE) direction finding, a feature introduced in BLE version 5.1, enables precise localization through Angle of Arrival (AoA) estimation. However, this advancement introduces new risks to BLE direction-finding-based localization systems. Specifically, AoA estimation based on phase sampling of the constant tone extension (CTE) is susceptible to signal injection attacks. This paper presents SaDiF, a feasible spoofing attack mechanism that misleads locators into mistaking the positioning result for a continuous path. By eavesdropping on BLE packets and injecting attack signals containing pre-designed disturbing phase shifts, SaDiF subtly alters the AoA estimation without detection, thus interfering with the localization results. Moreover, SaDiF addresses the challenges posed by hardware imperfections by proposing an injection timing optimization to improve attack robustness. Extensive experiments demonstrate the effectiveness of SaDiF in successfully attacking multiple BLE targets in real-time scenarios. In conclusion, our findings reveal critical security risks in the BLE direction finding feature and provide insights into strengthening its defenses.
Paper
Neagin Neasamoni Santhi, Davide Villa, Michele Polese, Tommaso Melodia (Northeastern University)
Abstract: Ultra-dense fifth generation (5G) networks leverage spectrum sharing and frequency reuse to enhance throughput, but face unpredictable in-band uplink (UL) interference challenges that significantly degrade Signal to Interference plus Noise Ratio (SINR) at affected Next Generation Node Bases (gNBs). This is particularly problematic at cell edges, where overlapping regions force User Equipments (UEs) to increase transmit power, and in directional millimeter wave systems, where beamforming sidelobes can create unexpected interference. The resulting signal degradation disrupts protocol operations, including scheduling and resource allocation, by distorting quality indicators like Reference Signal Received Power (RSRP) and Received Signal Strength Indicator (RSSI), and can compromise critical functions such as channel state reporting and Hybrid Automatic Repeat Request (HARQ) acknowledgments. In this paper, we present InterfO-RAN, a real-time programmable solution that leverages a Convolutional Neural Network (CNN) to process In-phase and Quadrature (I/Q) samples in the gNB physical layer, detecting in-band interference with accuracy exceeding 91% in under 650 µs. InterfO-RAN represents the first O-RAN dApp accelerated on a Graphics Processing Unit (GPU), coexisting with the 5G NR physical layer processing of NVIDIA Aerial. Deployed in an end-to-end private 5G network with commercial Radio Units (RUs) and smartphones, our solution was trained and tested on more than 7 million NR UL slots collected from real-world environments, demonstrating robust interference detection capabilities essential for maintaining network performance in dense deployments.
15:00 - 15:30
Coffee Break
(Commons)
15:30 - 17:00
Session 3: Beamforming
Session Chair: Francesco Restuccia
Room: Hudspeth Auditorium
Paper
Jy-Chin Liao, Burak Bilgin, Edward Knightly (Rice University)
Abstract: In theory, sub-terahertz (sub-THz) communication offers the potential for near-field \emph{beam focusing}, which can far surpass conventional far-field \emph{beam steering} by directing energy to a specific point in space rather than a broad angular direction. In this paper, we introduce \textit{TeraFocus}, the first adaptive beam focusing system for sub-THz wideband communication. Moreover, we present the first experimental demonstration of a wideband focusing link, in contrast to prior experiments with monochromatic sources.
With \textit{TeraFocus}, we demonstrate how metasurfaces with thousands of radiating elements (81 by 81, for a total of 6,561) can create the non-linear phase gradient required for precise beam focusing, achieving a 60 GHz 3-dB bandwidth. Moreover, \textit{TeraFocus} maximizes link capacity by addressing the inherent spectral asymmetry in metasurface responses. Finally, for mobile clients, \textit{TeraFocus} dynamically compensates for misalignment by leveraging the frequency-dependent focal length characteristic, allowing it to adapt to radial mobility without relying on traditional localization algorithms, which can be time-consuming, computationally expensive, and error-prone. Instead, \textit{TeraFocus} re-localizes the client by analyzing the received spectral signature of a misaligned focusing beam, achieving a maximum error of less than 0.2 cm.
Paper
Aadesh Madnaik, Karthikeyan Sundaresan (Georgia Institute of Technology)
Abstract: Millimeter wave (mmWave) radar sensing is essential for Integrated Sensing and Communication (ISAC) but is limited by the need for line-of-sight (LoS) in environments with obstacles. However, existing non-line-of-sight (NLoS) solutions, including those leveraging reconfigurable intelligent surfaces (RIS) are computationally demanding, unable to generalize to new surroundings or require tight synchronization/coordination that prevent practical deployments. We propose PRISM, a framework that brings the diversity of a RIS to monostatic radars to enhance target resolution in NLoS, all without requiring real-time coordination. PRISM uses spatial modulation at the RIS to encode targets' angular information through frequency shifts by creating a time-variant channel for improved detection at the radar. Key contributions include a wideband spatial modulation framework, a genetic algorithm to efficiently solve for the RIS configuration to enable such angle-dependent frequency shifts, and a narrowband approximation method to reduce the computational overhead. Evaluations show PRISM benefits commodity radars by reducing median localization error to 31 cm and maintaining 80% target detection accuracy for up to six targets in NLoS compared to 103 cm error and 66% accuracy with prior art, while providing a 12x reduction in computation time.
Paper
Kubra Alemdar (Northeastern University); Arnob Ghosh (New Jersey Institute of Technology); Vini Chaudhary (Mississippi State University); Ness Shroff (The Ohio State University); Kaushik Chowdhury (University of Texas Austin)
Abstract: Mobile Robots (MRs), typically equipped with single-antenna radios, face many challenges in maintaining reliable connectivity established by multiple wireless access points (APs). These challenges include the absence of direct line-of-sight (LoS), ineffective beam searching due to the time-varying channel, and interference constraints. This paper presents REMARKABLE, an online learning based adaptive beam selection strategy for robot connectivity that trains a kernelized bandit model directly in real-world settings of a factory floor. REMARKABLE employs reconfigurable intelligent surfaces (RISs) with passive reflective elements to create beamforming toward target robots, eliminating the need for multiple APs. We develop a method to create a beamforming codebook, reducing the search space complexity. We also develop a reconfigurable rotational mechanism to expand RIS coverage by rotating its projection plane. To address non-stationary conditions, we adopt the bandit-over-bandit idea that employs adaptive restarts, allowing the system to forget outdated observations and safely relearn the optimal interference-constrained beam. We show that our approach achieves a dynamic regret and violation bound of $\tilde{O}(T^{3/4}B^{1/4})$, where $T$ is the total time and $B$ is the total variation budget capturing the total changes in the environment, without assuming knowledge of $B$. Finally, experimental validation with custom-designed RIS hardware and mobile robots demonstrates 46.8% faster beam selection and 94.2% accuracy, outperforming classical methods across diverse mobility settings.
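As a rough illustration of the restart idea behind the bandit-over-bandit approach, the sketch below runs a UCB beam selector that periodically forgets old observations. The fixed restart interval, reward model, and beam count are assumptions; REMARKABLE instead adapts the restart schedule online and enforces interference constraints.

```python
import numpy as np

class RestartingUCB:
    """UCB beam selection that periodically forgets stale observations."""

    def __init__(self, n_beams, restart_every=200):
        self.n_beams, self.restart_every = n_beams, restart_every
        self.reset()

    def reset(self):
        self.counts = np.zeros(self.n_beams)
        self.means = np.zeros(self.n_beams)
        self.t = 0

    def select(self):
        if self.t and self.t % self.restart_every == 0:
            self.reset()                                  # forget outdated rewards
        self.t += 1
        untried = np.flatnonzero(self.counts == 0)
        if untried.size:
            return int(untried[0])
        bonus = np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(self.means + bonus))

    def update(self, beam, reward):
        self.counts[beam] += 1
        self.means[beam] += (reward - self.means[beam]) / self.counts[beam]

# Toy non-stationary run: the best beam changes halfway through.
rng = np.random.default_rng(1)
agent = RestartingUCB(n_beams=8)
for step in range(400):
    best = 2 if step < 200 else 5
    beam = agent.select()
    agent.update(beam, rng.normal(1.0 if beam == best else 0.3, 0.1))
```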
Paper
Ish Kumar Jain (RPI, UC San Diego); Rohith Reddy Vennam, Dinesh Bharadia (UC San Diego)
Abstract: The next generation of 6G networks aims to utilize ultra-wideband spectrum and massive antenna arrays to serve multiple users with both control and data channels at low latency and high efficiency. However, phased arrays at mmWave and mid-bands are fundamentally constrained to a single beam or suffer sharp beamforming loss when split across directions, limiting simultaneous control-data support. In FlexLink, we introduce and prototype a novel delay-phased array architecture that overcomes this limitation by redistributing energy jointly across frequency and space, enabling multiple narrow beams without sacrificing per-beam gain or requiring additional power. We design and prototype FlexLink on a custom 4-7 GHz hardware testbed, demonstrating for the first time that control and data beams can be decoupled in practice, achieving nearly double spectral efficiency compared to conventional phased arrays.
Room: Houston Museum of Natural Science (Direction)
Tuesday, Oct 28, 2025
08:00 - 17:00
Registration
(Anderson-Clarke Center)
08:30 - 10:00
Session 4: QoS & Network Slicing
Session Chair: Koushik Kar
Room: Hudspeth Auditorium
Paper
Quang Nguyen (Massachusetts Institute of Technology); Eytan Modiano (MIT)
Abstract: We propose a framework for resource provisioning with QoS guarantees in shared infrastructure networks. Our novel framework provides tunable probabilistic service guarantees for throughput and delay. Key to our approach is a Modified Drift-plus-Penalty (MDP) policy that ensures long-term stability while capturing short-term probabilistic service guarantees using linearized upper-confidence bounds. We characterize the feasible region of service guarantees and show that our MDP procedure achieves mean rate stability and an optimality gap that vanishes with the frame size over which service guarantees are provided. Finally, empirical simulations validate our theory and demonstrate the favorable performance of our algorithm in handling QoS in multi-infrastructure networks.
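For context, the classical drift-plus-penalty building block underlying such policies can be sketched in a few lines; the action set, costs, and queue backlogs below are toy assumptions, and the confidence-bound and frame-based elements of the paper's MDP policy are omitted.

```python
import numpy as np

def drift_plus_penalty_action(queues, service_options, costs, V=10.0):
    """Pick the allocation minimizing V * cost - sum_i Q_i * mu_i for one slot.

    Serving long queues reduces Lyapunov drift; the parameter V trades that
    drift reduction off against the penalty (e.g., resource cost).
    """
    scores = [V * c - float(np.dot(queues, mu)) for mu, c in zip(service_options, costs)]
    return int(np.argmin(scores))

# Toy example: two queues (backlogs 8 and 2) and three candidate allocations.
queues = np.array([8.0, 2.0])
service_options = [np.array([2.0, 0.0]), np.array([0.0, 2.0]), np.array([1.0, 1.0])]
costs = [1.0, 1.0, 1.5]
print(drift_plus_penalty_action(queues, service_options, costs))   # serves the longer queue
```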
Paper
Duo Cheng (Virginia Tech); Ramanujan K Sheshadri, Ahan Kak, Nakjung Choi (Nokia Bell Labs); Xingyu Zhou (Wayne State University); Bo Ji (Virginia Tech)
Abstract: Network slicing plays a crucial role in realizing 5G/6G advances, enabling diverse Service Level Agreement (SLA) requirements related to latency, throughput, and reliability. Since network slices are deployed end-to-end (E2E), across multiple domains including access, transport, and core networks, it is essential to efficiently decompose an E2E SLA into domain-level targets, so that each domain can provision adequate resources for the slice. However, decomposing SLAs is highly challenging due to the heterogeneity of domains, dynamic network conditions, and the fact that the SLA orchestrator is oblivious to the domain's resource optimization. In this work, we propose Odin, a Bayesian Optimization-based solution that leverages each domain's online feedback for provably-efficient SLA decomposition. Through theoretical analyses and rigorous evaluations, we demonstrate that Odin's E2E orchestrator can achieve up to 45% performance improvement in SLA satisfaction when compared with baseline solutions whilst reducing overall resource costs even in the presence of noisy feedback from the individual domains.
Paper
Shuwen Liu, Craig A. Shue (Worcester Polytechnic Institute)
Abstract: The function-as-a-service (FaaS) paradigm, often called "serverless computing," allows computing providers to scalably perform short-lived computational tasks at low cost. While the benefits of FaaS have been demonstrated for ephemeral computational tasks, the research community has not fully explored the applicability of FaaS platforms for longer-term, periodic tasks, like network controllers.
In this work, we explore the fusion of FaaS platforms with the controllers used in the software-defined networking (SDN) paradigm. Traditional SDN controllers are conceived as long-lived, logically-centralized, critical infrastructure for the networks they control. While such "always on" functionality would be a poor fit for the FaaS model, we re-envision the SDN controller as a small, distributed, and tailored entity that handles a single PACKET_IN request per instance. We explore this approach with edge computing environments from content delivery network (CDN) providers. In a CDN deployment, we observed a median PACKET_IN response time of 15.8 ms from a residential host. The controllers can scalably support both stateful and stateless policy using provider databases and can support geographically-distributed organizations while providing logically-centralized management.
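To make the per-packet controller model concrete, here is a hedged sketch of a generic FaaS-style handler that resolves a single PACKET_IN per invocation; the event schema, policy table, and response format are illustrative assumptions, not the paper's OpenFlow encoding or any specific provider's API.

```python
import json

# Hypothetical static policy table; real deployments would read stateful policy
# from a provider-hosted database, as described above.
POLICY = {("10.0.0.5", "10.0.0.9"): "allow", ("10.0.0.5", "10.0.0.7"): "drop"}

def handle_packet_in(event, context=None):
    """Generic FaaS-style handler that resolves one PACKET_IN per invocation."""
    pkt = json.loads(event["body"])
    action = POLICY.get((pkt["src_ip"], pkt["dst_ip"]), "drop")
    flow_rule = {
        "match": {"src_ip": pkt["src_ip"], "dst_ip": pkt["dst_ip"]},
        "action": action,
        "idle_timeout_s": 30,
    }
    return {"statusCode": 200, "body": json.dumps(flow_rule)}

# Example invocation with a synthetic event.
evt = {"body": json.dumps({"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9"})}
print(handle_packet_in(evt))
```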
Paper
Xun Wang, Zhuoran Li, Longbo Huang (Tsinghua University)
Abstract: Multi-user delay-constrained scheduling is a critical challenge in real-world applications such as embodied AI and modern network systems, where efficient resource allocation is required among users with diverse delay sensitivities. While deep reinforcement learning (DRL) has shown superior performance over traditional methods in complex environments, existing approaches typically assume a static user population. This limits their scalability, as user population changes necessitate costly retraining. To address this limitation, we propose Heterogeneous Embedding Multi-Agent Reinforcement Learning (HEMA), a novel multi-agent DRL algorithm that maps variable-shaped user features into a shared embedding space and employs a parameter-shared recurrent neural network to compute individual value functions. Following the centralized training with decentralized execution paradigm and a carefully designed online adaptation mechanism, HEMA efficiently adapts to dynamic user populations, adjusts its allocation strategy on the fly, and achieves high throughput under resource constraints. Extensive experiments show that HEMA outperforms the state-of-the-art DRL baseline by up to \textbf{32.1\%} in average throughput under static population settings, and achieves up to \textbf{31.8\%} higher steady-state throughput than traditional methods in dynamic environments, demonstrating strong effectiveness and better generalizability.
10:00 - 10:30
Coffee Break
(Commons)
10:30 - 12:00
Session 5: Federated & Distributed Learning
Session Chair: Ming Shi
Room: Hudspeth Auditorium
Paper
Xinghan Gong, Xiaowen Gong (Auburn University); Ying Sun (The Pennsylvania State University); Shiwen Mao (Auburn University)
Abstract: Decentralized federated learning (DFL) can greatly reduce communication costs due to its decentralized communication structure compared to traditional centralized federated learning (FL). Existing works on FL with partial client participation often considered idealized scenarios (such as all clients participate in a round with the same probability), or required using clients' past gradient/model information which can be too costly to implement, or focused on centralized FL. In this paper, we study lightweight decentralized federated learning that does not use any client’s past gradient/model information. We first present a novel sample-path-based cyclic convergence analysis for lightweight DFL with arbitrary client participation for the non-convex objectives case. The cyclic convergence analysis bounds clients’ local model drifts due to partial participation over multiple rounds within a cycle and the cyclic consensus error via a per-cycle descent approach, while capturing the effect of client participation through a single unified term. By analyzing this term, we propose Cyclic Decentralized Federated Learning (CDFL), which enables general cyclic client participation by requiring only that each client performs the same total number of local updates per cycle. Our results show that CDFL achieves a convergence rate that matches existing benchmarks. We further propose a cyclic control framework that is both training-round and energy efficient to adaptively select participating clients and determine their number of local updates. Numerical experiments using real-world datasets verify our theoretical results and demonstrate the effectiveness of CDFL and the adaptive cyclic control framework.
Paper
Faeze Moradi Kalarde, Ben Liang (University of Toronto); Min Dong (Ontario Tech University); Yahia A. Eldemerdash Ahmed, Ho Ting Cheng (Ericsson Canada)
Abstract: This work addresses the trade-off between convergence and the overall delay in heterogeneous distributed learning systems, where the devices encounter diverse and dynamic communication conditions. We propose to apply adaptive sparsification across the devices and over iterations, formulating an optimization problem to minimize the overall delay while ensuring a specified level of convergence.
The resultant stochastic optimization problem cannot be handled by conventional Lyapunov optimization techniques due to the dependency of the per-iteration objective function on the previous iterations. To overcome this challenge, we propose AdaSparse, an online algorithm with a novel per-slot problem that can be solved optimally by searching over a finite discrete space. We further introduce a low-complexity approximation of AdaSparse, termed LC-AdaSparse, which features linear computational complexity and diminishing approximation error. We show that AdaSparse offers strong performance guarantees, simultaneously achieving sub-linear dynamic regret in terms of delay and the optimal rate in terms of convergence.
Numerical experiments on classification tasks using standard datasets and various models demonstrate that our approach effectively reduces the communication delay compared with existing benchmarks, to achieve the same levels of learning accuracy.
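For readers unfamiliar with sparsified communication, the sketch below shows plain top-k gradient sparsification, the operation whose aggressiveness AdaSparse adapts; how the keep ratio is chosen per device and per iteration is the paper's contribution and is not reproduced here.

```python
import numpy as np

def topk_sparsify(grad, keep_ratio):
    """Zero all but the largest-magnitude entries of a gradient tensor."""
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

g = np.arange(-8.0, 8.0).reshape(4, 4)
print(topk_sparsify(g, keep_ratio=0.25))   # only the 4 largest-magnitude entries survive
```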
Paper
Jiahao Xu, Zikai Zhang, Rui Hu (University of Nevada, Reno)
Abstract: Traditional backdoor attacks in federated learning (FL) operate within constrained attack scenarios, as they depend on visible triggers and require physical modifications to the target object, which limits their practicality. To address this limitation, we introduce a novel backdoor attack prototype for FL called the out-of-distribution (OOD) backdoor attack ($\mathtt{OBA}$), which uses OOD data as both poisoned samples and triggers simultaneously. Our approach significantly broadens the scope of backdoor attack scenarios in FL. To improve the stealthiness of $\mathtt{OBA}$, we propose $\mathtt{SoDa}$, which regularizes both the magnitude and direction of malicious local models during local training, aligning them closely with their benign versions to evade detection. Empirical results demonstrate that $\mathtt{OBA}$ effectively circumvents state-of-the-art defenses while maintaining high accuracy on the main task.
To address this security vulnerability in the FL system, we introduce $\mathtt{BNGuard}$, a new server-side defense method tailored against $\mathtt{SoDa}$. $\mathtt{BNGuard}$ leverages the observation that OOD data causes significant deviations in the running statistics of batch normalization layers. This allows $\mathtt{BNGuard}$ to identify malicious model updates and exclude them from aggregation, thereby enhancing the backdoor robustness of FL. Extensive experiments across various settings show the effectiveness of $\mathtt{BNGuard}$ on defending against $\mathtt{SoDa}$. The code is available at \url{https://github.com/JiiahaoXU/SoDa-BNGuard}.
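A hedged sketch of the batch-normalization-statistics check that motivates BNGuard: score each client update by the drift of its BN running statistics from a benign reference and drop outliers. The distance metric, outlier rule, and toy data below are illustrative assumptions, not the paper's exact detection procedure.

```python
import numpy as np

def bn_deviation(update, reference):
    """Distance between a client's BatchNorm running statistics and a benign reference."""
    u = np.concatenate([update["running_mean"], update["running_var"]])
    r = np.concatenate([reference["running_mean"], reference["running_var"]])
    return float(np.linalg.norm(u - r))

def filter_updates(updates, reference, z_thresh=2.0):
    """Keep only updates whose BN-statistic deviation is not an outlier."""
    scores = np.array([bn_deviation(u, reference) for u in updates])
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return [u for u, zi in zip(updates, z) if zi < z_thresh]

# Toy cohort: nine benign updates plus one whose statistics drift strongly (OOD-driven).
ref = {"running_mean": np.zeros(16), "running_var": np.ones(16)}
benign = [{"running_mean": np.random.normal(0, 0.05, 16),
           "running_var": np.ones(16) + np.random.normal(0, 0.05, 16)} for _ in range(9)]
malicious = [{"running_mean": np.full(16, 3.0), "running_var": np.full(16, 5.0)}]
print(len(filter_updates(benign + malicious, ref)))   # 9: the drifted update is excluded
```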
Paper
Surojit Ganguli, Zeyu Zhou, Christopher Brinton, David I. Inouye (Purdue University)
Abstract: Many intelligence tasks operate over networks where observations are vertically split across devices, necessitating collaborative inference. Existing collaborative learning approaches, such as Vertical Federated Learning (VFL), typically implicitly assume the existence of an architecture that is reasonably fault-tolerant, e.g., a star topology to an aggregator node that never fails. However, in practice, device networks may be decentralized and possess dynamic connectivity, making them susceptible to catastrophic faults (e.g., environmental disruptions, extreme weather). In this work, we study the problem of enabling robust collaborative inference over these decentralized, dynamic, and fault-prone networks. We first formulate the impact of faults on collaborative inference through a notion of dynamic risk for the data and network context. Then, we develop Multiple Aggregation with Gossip Rounds and Simulated Faults (MAGS) which synthesizes three features to enhance fault tolerance during inference: (i) fault simulation via dropout in training, (ii) replication of aggregators across devices, and (iii) gossip layers to produce an ensemble inference. We provide theoretical insights into why each of these components enhances robustness, e.g., proving that the gossip protocol reduces dynamic risk according to prediction diversity. We conduct extensive evaluations over five datasets and different network configurations, which validate that MAGS substantially improves robustness over VFL baselines. The code is available at: https://github.com/inouye-lab/MAGS_Distributed_Robust_Learning
12:00 - 13:30
Lunch
(Commons)
Abstract: As we stand on the brink of a new era in wireless technology, the advent of 6G promises to revolutionize the way we interact with the digital world. In this presentation, we will discuss the transformative potential of 6G technology and explore how 6G is set to empower next-generation user experiences and services on an unprecedented scale.
We will address the need for continued wireless evolution to enable integration of AI-enhanced user interfaces, consumer behaviors driven by AI, and the role of AI agents in driving network connectivity, along with the resulting architectural advancements in 6G, focusing on its ability to connect an expanded set of AI-powered devices and enable new services. We will also examine the role of 6G in leveraging on-device AI to infer and share real-time context with the network enabling real-time digital twin platforms, thus providing insights into the future of wireless technology and its potential to shape the digital landscape.
Join us as we explore the opportunities for AI and wireless technologies in 6G to reduce operational expenses, enhance network efficiency, and enable new services and user experiences.
Bio: Dr. Kiran Mukkavilli is Senior Director of Engineering at Qualcomm Wireless Research, where he leads systems research for 6G focusing on Giga MIMO, Full Duplex, Sensing, IoT and Location Technologies. Since joining Qualcomm in 2003, Dr. Mukkavilli has contributed to the design, standardization, and commercialization of numerous wireless technologies, and holds over 570 granted U.S. patents. He led Qualcomm’s research and prototyping effort to establish industry’s first end-to-end 3GPP specification compliant 5G NR call with leading infrastructure vendors. He was also a principal architect of MediaFLO, Qualcomm’s mobile broadcast solution, playing a pivotal role in its product development and global rollout. As systems design lead, he also drove the commercialization of the UMTS modem in Snapdragon 800/801 platforms – one of the most successful products for Qualcomm. He has been profiled as one of the notable inventors at Qualcomm. Dr. Mukkavilli holds a Ph.D. and M.S. in Electrical Engineering from Rice University, and a B.Tech in Electrical Engineering from the Indian Institute of Technology, Madras.
14:30 - 14:45
Changeover Break
14:45 - 16:00
Session 6: Edge Inference
Session Chair: I-Hong Hou
Room: Hudspeth Auditorium
Paper
Liangqi Yuan (Purdue University); Dong-Jun Han (Yonsei University); Shiqiang Wang (IBM Research); Christopher Brinton (Purdue University)
Abstract: Compared to traditional machine learning models, recent large language models (LLMs) can exhibit multi-task-solving capabilities through multiple dialogues and multi-modal data sources. These unique characteristics of LLMs, together with their large model size, make their deployment more challenging. Specifically, (i) deploying LLMs on local devices faces computational, memory, and energy resource issues, while (ii) deploying them in the cloud cannot guarantee real-time service and incurs communication/usage costs. In this paper, we design TMO, a local-cloud LLM inference system with Three-M Offloading: Multi-modal, Multi-task, and Multi-dialogue. TMO incorporates (i) a lightweight local LLM that can process simple tasks at high speed and (ii) a large-scale cloud LLM that can handle multi-modal data sources. We develop a resource-constrained reinforcement learning (RCRL) strategy for TMO that optimizes the inference location (i.e., local vs. cloud) and multi-modal data sources to use for each task/dialogue, aiming to maximize the long-term reward (response quality, latency, and usage cost) while adhering to resource constraints. We also contribute M4A1, a new dataset we curated that contains reward and cost metrics across multiple modality, task, dialogue, and LLM configurations, enabling evaluation of offloading decisions. We demonstrate the effectiveness of TMO compared to several exploration-decision and LLM-as-Agent baselines, showing significant improvements in latency, cost, and response quality.
Paper
Shiva Saxena, Ben Liang (University of Toronto)
Abstract: We consider the partitioning of a deep neural network (DNN) inference job and offloading part of it from a resource-constrained device to a resource-rich server. The inference job is required to finish within a delay constraint, but it is allowed to perform early exit at some intermediate layer of the DNN, at the cost of lower accuracy. Since in practice both the processing delay and the communication delay of offloading usually are unknown ahead of time, this is naturally modelled as an online delay-constrained accuracy maximization problem. We propose Accuracy-Optimal Delay Constrained Online Partitioning (AODPart), a lightweight online algorithm that uses an adaptive thresholding strategy to solve the offloading problem. We derive the competitive ratio for AODPart and show that it is optimal in the sense that no other online algorithm can achieve a lower deterministic competitive ratio. Furthermore, we show that AODPart is robust and provides worst-case performance guarantee even with parameter estimation error. Through experimenting with common vision and language learning models, we demonstrate that AODPart substantially outperforms state-of-the-art alternatives and returns near optimal accuracy in practice.
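As a point of reference for the partitioning-with-early-exit setup, the sketch below greedily picks the deepest exit that fits a delay budget when all delays are known; in the online setting studied in the paper these delays are unknown ahead of time, which is why AODPart instead uses an adaptive-threshold policy with a competitive-ratio guarantee. All layer timings and exit points are made up.

```python
def choose_exit_and_partition(layer_local_ms, layer_remote_ms, exit_points,
                              uplink_ms, deadline_ms):
    """Pick the deepest early-exit point that fits a delay budget (offline illustration)."""
    best = None
    for exit_layer, split in exit_points:               # (exit depth, offload split point)
        local = sum(layer_local_ms[:split])              # layers run on the device
        remote = sum(layer_remote_ms[split:exit_layer])  # layers offloaded to the server
        total = local + uplink_ms + remote
        if total <= deadline_ms:
            best = (exit_layer, split, total)            # deeper exits give higher accuracy
    return best

# Toy numbers: a 6-layer model with exits after layers 4 and 6, split after layer 2.
local = [30, 30, 30, 30, 30, 30]
remote = [5, 5, 5, 5, 5, 5]
print(choose_exit_and_partition(local, remote, [(4, 2), (6, 2)], uplink_ms=20, deadline_ms=120))
```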
Paper
Shangyu Liu, Zhenzhe Zheng (Shanghai Jiao Tong University); Xiaoyao Huang (Cloud Computing Research Institute, China Telecom); Fan Wu, Guihai Chen (Shanghai Jiao Tong University); Jie Wu (Temple University; Cloud Computing Research Institute, China Telecom)
Abstract: Small language models (SLMs) support efficient deployments on resource-constrained edge devices, but their limited capacity compromises inference performance. Retrieval-augmented generation (RAG) is a promising solution to enhance model performance by integrating external databases, without requiring intensive on-device model retraining. However, large-scale public databases and user-specific private contextual documents are typically located on the cloud and the device, respectively, while existing RAG implementations are primarily centralized. To bridge this gap, we propose DRAGON, a distributed RAG framework to enhance on-device SLMs through both general and personal knowledge without the risk of leaking document privacy. Specifically, DRAGON decomposes multi-document RAG into multiple parallel token generation processes performed independently and locally on the cloud and the device, and employs a newly designed Speculative Aggregation, a dual-side speculative algorithm to avoid frequent output synchronization between the cloud and device. A new scheduling algorithm is further introduced to identify the optimal aggregation side based on real-time network conditions. Evaluations on a real-world hardware testbed demonstrate a significant performance improvement of DRAGON—up to $1.9\times$ greater gains over standalone SLM compared to the centralized RAG, substantial reduction in per-token latency, and negligible Time to First Token (TTFT) overhead.
16:00 - 16:30
Coffee Break
(Commons)
16:30 - 18:00
Session 7: Learning for Networking
Session Chair: Arnob Ghosh
Room: Hudspeth Auditorium
Paper
Tanzil Hassan (Northeastern University); Francesca Meneghello (Northeastern University); Francesco Restuccia (Northeastern University)
Abstract: While artificial intelligence (AI) is improving the performance of O-RAN, it will also expose the network to adversarial machine learning (AML) attacks. For this reason, in this paper, we are the first to investigate AML in the context of deep reinforcement learning (DRL)-based O-RAN xApps. What separates AML in O-RAN from traditional settings is the need to design and analyze adversarial attacks based on RAN-specific Key Performance Measures (KPMs) such as transmitted bit rate, downlink buffer occupancy, transmitted packets, etc. As such, we propose the AdvO-RAN framework, which includes (i) a new adversarial perturbation generator using preference-based reinforcement learning (PbRL) to learn the perturbations that most violate user service level agreements (SLAs) and (ii) a robust training module for enhancing DRL agent resilience to the attacks in (i). We experimentally evaluate AdvO-RAN on the Colosseum network emulator. Experimental results show that AdvO-RAN can enhance xApp performance by reducing SLA violations from 44% to 27% on average and reducing latency by 46% under the most challenging attack scenario for Ultra-Reliable Low-Latency Communications (URLLC) traffic. AdvO-RAN can also improve throughput by up to 75% for the victim Enhanced Mobile Broadband (eMBB) slice users during a constant bit-rate traffic scenario.
Paper
Myeung Suk Oh, Zhiyao Zhang (The Ohio State University); FNU Hairi (University of Wisconsin-Whitewater); Alvaro Velasquez (University of Colorado Boulder); Jia Liu (The Ohio State University)
Abstract: With wireless devices increasingly forming a unified smart network for seamless, user-friendly operations, random access (RA) medium access control (MAC) design is considered a key solution for handling unpredictable data traffic from multiple terminals. However, it remains challenging to design an effective RA-based MAC protocol to minimize collisions and ensure transmission fairness across the devices. While existing multi-agent reinforcement learning (MARL) approaches with centralized training and decentralized execution (CTDE) have been proposed to optimize RA performance, their reliance on centralized training and the significant overhead required for information collection can make real-world applications unrealistic. In this work, we adopt a fully decentralized MARL architecture, where policy learning does not rely on centralized tasks but leverages consensus-based information exchanges across devices. We design our MARL algorithm over an actor-critic (AC) network and propose exchanging only local rewards to minimize communication overhead. Furthermore, we provide a theoretical proof of convergence for our approach. Numerical experiments show that our proposed MARL algorithm can significantly improve RA network performance compared to other baselines.
Paper
Jerrod Wigmore, Eytan Modiano (MIT)
Abstract: We propose a novel Graph Neural Network (GNN) architecture and training framework for optimizing multi-class resource allocation in stochastic queueing networks. Our goal is to minimize metrics such as average delay, worst-case delay, and power consumption across diverse network topologies, traffic patterns, and link characteristics. Traditional GNNs struggle to model inter-class dependencies while preserving permutation invariance—a key property for generalization. To address this, we introduce Multi-Axis GNNs (MA-GNNs), which augment message passing with structured, permutation-invariant information exchange across traffic classes at each node. This enables reasoning over both spatial and inter-class dependencies via matrix-valued node and edge features. We train MA-GNNs using a Network Performance Gradient Algorithm—a model-free reinforcement learning method that directly optimizes routing coefficients for both state-independent and state-dependent policies, without requiring expert supervision. Experiments on diverse multi-hop, multi-commodity routing environments demonstrate that MA-GNNs consistently outperform classical baselines such as shortest-path and backpressure routing, while generalizing effectively to unseen networks.
Paper
Shugang Hao (Singapore University of Technology and Design); Hongbo Li (The Ohio State University); Lingjie Duan (Singapore University of Technology and Design; Hong Kong University of Science and Technology (Guangzhou))
Abstract: The binary exponential backoff scheme is widely used in WiFi 7 and still incurs poor throughput performance under dynamic channel environments. Recent model-based approaches (e.g., non-persistent and $p$-persistent CSMA) simply optimize backoff strategies under a known and fixed node density, still leading to a large throughput loss due to inaccurate node density estimation. This paper is the first to propose LLM transformer-based in-context learning (ICL) theory for optimizing channel access. We design a transformer-based ICL optimizer to pre-collect collision-threshold data examples and a query collision case. They are constructed as a prompt as the input for the transformer to learn the pattern, which then generates a predicted contention window threshold (CWT). To train the transformer for effective ICL, we develop an efficient algorithm and guarantee a near-optimal CWT prediction within limited training steps. As it may be hard to gather perfect data examples for ICL in practice, we further extend to allow erroneous data input in the prompt. We prove that our optimizer maintains minimal prediction and throughput deviations from the optimal values. Experimental results on NS-3 further demonstrate our approach's advantage.
Room: Valhalla, Rice's Graduate Student Pub (Direction)
Wednesday, Oct 29, 2025
08:00 - 17:00
Registration
(Anderson-Clarke Center)
08:30 - 10:00
Session 8: Age of Information
Session Chair: Yin Sun
Room: Hudspeth Auditorium
Paper
Mohamed A. Abd-Elmagid (The Ohio State University); Ming Shi (The State University of New York at Buffalo); Eylem Ekici, Ness B. Shroff (The Ohio State University)
Abstract: We consider a real-time monitoring system where a source node (with energy limitations) aims to keep the information status at a destination node as fresh as possible by scheduling status update transmissions over a set of channels. The freshness of information at the destination node is measured in terms of the Age of Information (AoI) metric. In this setting, a natural tradeoff exists between the transmission cost (or equivalently, energy consumption) of the source and the achievable AoI performance at the destination. This tradeoff has been optimized in the existing literature under the assumption of having a complete knowledge of the channel statistics. In this work, we develop online learning-based algorithms with finite-time guarantees that optimize this tradeoff in the practical scenario where the channel statistics are unknown to the scheduler. In particular, when the channel statistics are known, the optimal scheduling policy is first proven to have a threshold-based structure with respect to the value of AoI (i.e., it is optimal to drop updates when the AoI value is below some threshold). This key insight was then utilized to develop the proposed learning algorithms that surprisingly achieve an order-optimal regret (i.e., $O(1)$) with respect to the time horizon length.
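The threshold structure described above has a very simple operational form; the toy trace below (made-up threshold, idealized channel) only illustrates that structure, while the paper's algorithms learn the unknown channel statistics online.

```python
def should_transmit(aoi, threshold):
    """Threshold-structured decision: send an update only once AoI has crossed the threshold."""
    return aoi >= threshold

# Toy trace: AoI grows by one each slot and resets after an (assumed successful) update.
aoi, threshold = 0, 4
for slot in range(10):
    aoi += 1
    if should_transmit(aoi, threshold):
        print(f"slot {slot}: transmit at AoI={aoi}")
        aoi = 0   # freshly delivered update in this idealized channel
```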
Paper
Lin Wang, I-Hong Hou (Texas A&M University)
Abstract: This paper characterizes the fundamental trade-off between throughput and Age of Information (AoI) in wireless networks where multiple devices transmit status updates to a central base station over unreliable channels. To address the complexity introduced by stochastic transmission successes, we propose the throughput-AoI capacity region, which defines all feasible throughput-AoI pairs achievable under any scheduling policy. Using a second-order approximation that incorporates both mean and temporal variance, we derive an outer bound and a tight inner bound for the throughput-AoI capacity region. Furthermore, we propose a simple and low complexity scheduling policy and prove that it achieves every interior point within the tight inner bound. This establishes a systematic and theoretically grounded framework for the joint optimization of throughput and information freshness in practical wireless communication scenarios.
To validate our theoretical framework and demonstrate the utility of the throughput-AoI capacity region, extensive simulations are implemented. Simulation results demonstrate that our proposed policy significantly outperforms conventional methods across various practical network optimization scenarios. The findings highlight our approach's effectiveness in optimizing both throughput and AoI, underscoring its applicability and robustness in practical wireless networks.
Paper
Adem Utku Atasayar, Aimin Li, Cagri Ali, Elif Uysal (Middle East Technical University)
Abstract: In this paper, we extend the \textit{freshness}-oriented sampling problem by incorporating controlled delay statistics through \textit{heterogeneous routing options}, where Age of Information (AoI) serves as the metric for data \textit{freshness}. Our objective is to jointly optimize routing and sampling policies to minimize the \textit{long-term average} AoI, where the sender can choose to forward each status update over one of the available routes, which have varying delay statistics. We formulate this optimization as an infinite-horizon Semi-Markov Decision Process (SMDP) with an \textit{uncountable} state space and a \textit{hybrid} action space, consisting of discrete routing choices and continuous waiting times. We develop an efficient algorithm to solve this problem and theoretically establish that the optimal policy exhibits a threshold structure, characterized by: (i) a threshold-based \textit{monotonic handover mechanism} for optimal routing, where the switching order aligns with the decreasing order of mean delays; and (ii) a multi-threshold \textit{piecewise linear waiting mechanism} for optimal sampling, where the total number of thresholds is upper bounded by $2N-1$, given $N$ selectable routes. We implement the proposed algorithm in a \textit{satellite-terrestrial} integrated routing scenario, and simulation results reveal an intriguing insight: \textbf{routes with higher average delay and variance can still contribute to minimizing AoI}.
Paper
Chengzhang Li (The Ohio State University); Peizhong Ju (University of Kentucky); Atilla Eryilmaz, Ness Shroff (The Ohio State University)
Abstract: The challenge of classification at the network edge is that due to limited computational resources, the edge must transmit the data to a server for processing. However, the communication constraints at the edge necessitate that these devices compress data before transmission. The question this paper aims to answer is how to efficiently compress and transmit this information in order to achieve timely and accurate edge classification. To that end, we develop scheduling algorithms that optimize age of information (AoI) and classification accuracy. Our analysis reveals that in scenarios with multiple available compression levels, an algorithm that selects at most two compression levels can achieve good theoretical performance guarantees. Numerical results indicate that double-level compression algorithms yield near-optimal performance, suggesting that for many classification tasks, numerous compression levels are unnecessary---only two are sufficient, significantly reducing the storage demands on devices and simplifying the overall system design.
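The "two compression levels suffice" observation can be visualized with a small brute-force search over time-sharing between level pairs under an average-delay budget; the (delay, accuracy) numbers and the budget semantics below are made up for illustration and do not reflect the paper's AoI-based scheduling model.

```python
def best_two_level_mix(levels, delay_budget):
    """Best time-sharing between at most two compression levels.

    levels: list of (delay, accuracy) per compression level; returns
    (i, j, alpha, accuracy), meaning use level i a fraction alpha of the time
    and level j otherwise, maximizing average accuracy subject to an
    average-delay budget. Brute force over level pairs.
    """
    best = None
    for i, (d_i, a_i) in enumerate(levels):
        for j, (d_j, a_j) in enumerate(levels):
            if d_i == d_j:
                if d_i > delay_budget:
                    continue
                alpha = 1.0
            else:
                alpha = (delay_budget - d_j) / (d_i - d_j)   # makes the budget tight
                if not (0.0 <= alpha <= 1.0):
                    continue
            acc = alpha * a_i + (1.0 - alpha) * a_j
            if best is None or acc > best[3]:
                best = (i, j, alpha, acc)
    return best

# Three hypothetical levels: heavy, medium, light compression as (delay ms, accuracy).
print(best_two_level_mix([(5, 0.70), (12, 0.85), (25, 0.95)], delay_budget=15))
```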
10:00 - 10:30
Coffee Break
(Commons)
10:30 - 12:00
Session 9: Incentives & Peer Systems
Session Chair: Jian Li
Room: Hudspeth Auditorium
Paper
Yumou Liu (Shanghai Jiao Tong University, The Chinese University of Hong Kong, Shenzhen); Zhenzhe Zheng, Fan Wu, Guihai Chen (Shanghai Jiao Tong University)
Abstract: In the creator economy, the online platform posts a contract to incentivize creators to produce high-quality content. Creators can use human effort or Generative Artificial Intelligence (GenAI) to assist them in creating content for rewards. In this work, we identify that the existing contract suffers from GenAI-Induced Effort Degradation, where creators may not try their best to create high-quality content, but instead turn to GenAI to save effort and extract additional reward.
Our results show that GenAI-induced effort degradation arises from failing to provide the same profit for the creators generating the same quality content. With this observation, we design the contract for the creator economy in the era of GenAI as a classical contract for human content and an estimation of the human effort cost required to achieve GenAI's content quality. To overcome the challenge of maximizing platform profit with high-quality content guarantee, we propose a bandit-based online learning approach with a regret of $O(T^{\frac{d}{d+1}})$. To alleviate the curse of dimensionality, we design an adaptive action pruning method that gradually filters out contracts resulting in low quality. Simulation experiments on synthetic data demonstrate the effectiveness of our approach.
Paper
Yunqi Zhang, Shaileshh Bojja Venkatakrishnan (The Ohio State University)
Abstract: Popular blockchains today have hundreds of thousands of nodes and need to be able to support sophisticated scaling solutions—such as sharding, data availability sampling, and layer-2 methods. Designing secure and efficient peer-to-peer (p2p) networking protocols at these scales to support the tight demands of the upper layer crypto-economic primitives is a highly non-trivial endeavor. We identify decentralized, uniform random sampling of nodes as a fundamental capability necessary for building robust p2p networks in emerging blockchain networks. Sampling algorithms used in practice today (primarily for address discovery) rely on either distributed hash tables (e.g., Kademlia) or sharing addresses with neighbors (e.g., GossipSub), and are not secure in a Sybil setting. We present Honeybee, a decentralized algorithm for sampling nodes that uses verifiable random walks and table consistency checks. Honeybee is secure against attacks even in the presence of an overwhelming number of Byzantine nodes (e.g., $\geq$ 50% of the network). We evaluate Honeybee through experiments and show that the quality of sampling achieved by Honeybee is significantly better compared to the state-of-the-art. Our proposed algorithm has implications for network design in both full nodes and light nodes.
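For intuition on the walk-based sampling primitive, here is a plain, unverified random walk over a toy overlay graph; Honeybee's contribution of making each step verifiable and cross-checking routing tables against Sybil bias is not captured by this sketch.

```python
import random

def random_walk_sample(adjacency, start, steps):
    """Sample a peer by walking the overlay graph for a fixed number of hops."""
    node = start
    for _ in range(steps):
        node = random.choice(adjacency[node])
    return node

# Toy overlay with four nodes.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(random_walk_sample(graph, start=0, steps=10))
```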
Paper
Yiting Hu, Lingjie Duan (Singapore University of Technology and Design)
Abstract: Information sharing platforms like TripAdvisor and Waze involve human agents as both information producers and consumers. All these platforms operate in a centralized way to collect agents' latest observations of new options (e.g., restaurants, hotels, travel routes) and share such information with all in real time. However, after hearing the central platforms' live updates, many human agents are found selfish and unwilling to further explore unknown options for the benefit of others in the long run. To regulate the human-in-the-loop learning (HILL) game against selfish agents' free-riding, this paper proposes a paradigm shift from centralized to decentralized way of operation that forces agents' local explorations through restricting information sharing. When game theory meets distributed learning, we formulate our decentralized communication mechanism's design as a new multi-agent Markov decision process (MA-MDP), and derive its analytical condition to outperform today's centralized operation. As the optimal decentralized communication mechanism in MA-MDP is NP-hard to solve, we present an asymptotically optimal algorithm with linear complexity to determine the mechanism's timing of intermittent information sharing. Then we turn to non-myopic agents who may reverse to even over-explore, and adapt our mechanism design to work. Simulation experiments using real-world dataset demonstrate the effectiveness of our decentralized mechanisms for various scenarios.
Paper
Xiaoyi Wu, Juaren Steiger, Bin Li (Pennsylvania State University); R. Srikant (UIUC)
Abstract: Immersive technologies, such as virtual and augmented reality, demand high framerate, low latency, and precise synchronization between real and virtual environments. To meet these requirements, an edge server typically needs to perform high-quality rendering, and must predict user head motion and transmit a portion of the rendered panoramic scene that is large enough to cover the user's viewport, yet small enough to satisfy bandwidth constraints. Each portion yields two feedback signals: prediction feedback, indicating whether the selected portion covers the actual viewport, and transmission feedback, indicating whether all data packets are successfully delivered. While prior work models this setting as a multi-armed bandit with two-level bandit feedback, it overlooks that prediction feedback can be retrospectively computed for all possible portions, thus providing full-information feedback. In this work, we introduce a new two-level feedback model that combines full-information feedback with bandit feedback, and we formulate the portion selection problem as an online learning task under this hybrid setting. We derive an instance-dependent regret lower bound for this new hybrid feedback setting, and we propose AdaPort, a hybrid learning algorithm that leverages both the full-information feedback and bandit feedback to improve learning efficiency. We then show that the instance-dependent regret upper bound for AdaPort matches the lower bound asymptotically, proving its asymptotic optimality. Simulations using synthetic data and real-world traces demonstrate that AdaPort consistently outperforms state-of-the-art baselines, validating the benefits of exploiting the hybrid feedback structure.
12:00 - 12:30
Lunch (Pick up lunch boxes and walk over to Moody)
(Commons)
12:30 - 13:30
Poster/Demo Session
Room: Moody Central Gallery
Paper
Eduardo Baena, Ankita Mandal, Dimitrios Koutsonikolas
Abstract: We demonstrate a novel agentic control architecture for dynamic edge network reconfiguration, featuring a dual-Model Context Protocol (MCP) deployment, one instance at the gNodeB and another at the User Equipment (UE), coordinated via a shared client interface. A Large Language Model (LLM) receives high-level user intent (e.g., "improve video call stability"), and constructs a semantic action plan conditioned on real-time context and validated operational constraints. The system enables collaborative reasoning across the UE and RAN components, with human-in-the-loop authorization governing execution. Unlike conventional APIs or policy engines, MCP supports reversibility, semantic validation, and learning from feedback. This demonstration highlights the potential for intent-driven, context-aware, verifiable autonomy in edge wireless systems.
Paper
Harim Kang, Changhee Joo
Abstract: We present a practical in-network compression system for multimedia delivery under dynamic wireless conditions at the network edge. Our system leverages Dynamic Multi-Level AutoEncoder (DMAE), allowing edge nodes to apply additional compression in response to local congestion. This enables runtime, feedback-free adaptive compression, ensuring timely delivery of multimedia traffic. We demonstrate the system's feasibility on a programmable three-node testbed.
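To give a flavor of runtime level selection (a simplified sketch; the thresholds, latent size, and truncation rule are illustrative and do not reproduce the actual DMAE architecture or policy), an edge node might map its local queue occupancy to a compression level and truncate a latent representation accordingly:

```python
import numpy as np

def pick_level(queue_occupancy, levels=(1.0, 0.5, 0.25, 0.125)):
    """Map local congestion (queue occupancy in [0, 1]) to the fraction
    of latent coefficients that are kept. Thresholds are illustrative."""
    if queue_occupancy < 0.25:
        return levels[0]
    if queue_occupancy < 0.5:
        return levels[1]
    if queue_occupancy < 0.75:
        return levels[2]
    return levels[3]

def compress(latent, keep_fraction):
    """Keep only the leading fraction of the latent vector (a stand-in
    for a multi-level autoencoder's nested bottlenecks)."""
    k = max(1, int(len(latent) * keep_fraction))
    return latent[:k]

latent = np.random.randn(128)
print(len(compress(latent, pick_level(queue_occupancy=0.8))))  # -> 16
```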
Paper
Zongshen Wu, Rahman Doost-Mohammady, Ashutosh Sabharwal
Abstract: Radio Access Networks (RANs) are increasingly softwarized and disaggregated, resulting in diverse and interoperable components running on commodity hardware and compute platforms. In this demo, we present a real-time RAN testing framework, called ETHOS, designed to capture fine-grained metrics on performance, energy consumption, and computational efficiency for O-RAN-compliant systems across a range of compute platforms. ETHOS also interfaces with the RAN stack for real-time controlling to test various scheduling algorithms. ETHOS leverages a shared memory-based inter-process communication (IPC) framework and automates testing through the emulation of diverse user traffic and channel conditions. We deploy ETHOS on the NVIDIA ARC-OTA platform in this demonstration to show its effectiveness. ETHOS is designed to be easily portable to other commercial and open-source RAN software, offering the potential for a unified testing and evaluation framework to accelerate O-RAN research and innovation.
Abstract: We present a passive reconfigurable intelligent surface (RIS)-assisted beamforming system operating at 900 MHz, enhanced with vision-based environmental sensing for real-time adaptation of the reflected beam direction. In our setup, the RIS enables signals from a given transmitter to reach a mobile user by setting the appropriate weights for the RIS elements in real time. To handle dynamic conditions, we deploy an 8-bit Integer (INT8) quantized Convolutional Neural Network (CNN) at the receiver smartphone that utilizes images from the camera to infer the optimal RIS beam index given the transient angular difference between the RIS and the smartphone. By using such an out-of-modality image sensor, we avoid an exhaustive search through the entire codebook of beams. We demonstrate the system on Qualcomm's Snapdragon 8 Gen 3 mobile platform hardware with CNN optimization through the open source AI Model Efficiency Toolkit (AIMET). Our demo reveals that carefully optimized machine learning models at the network edge can enable low-latency and computationally efficient RIS beamforming for practical wireless deployments.
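In simplified terms, once the camera-based CNN has estimated the angular offset between the RIS and the phone, picking a beam reduces to quantizing that angle onto the codebook. The mapping below is purely illustrative (the codebook size, field of view, and CNN pipeline are assumptions, not the demo's actual parameters):

```python
def beam_index_from_angle(angle_deg, num_beams=32, fov_deg=120.0):
    """Quantize an estimated RIS-to-receiver angle onto a uniform beam
    codebook. Codebook size and field of view are illustrative."""
    angle = max(-fov_deg / 2, min(fov_deg / 2, angle_deg))
    step = fov_deg / num_beams
    return min(num_beams - 1, int((angle + fov_deg / 2) // step))

print(beam_index_from_angle(15.0))  # -> 20
```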
Paper
Peter Doichinov, Ram M. Narayanan, Ambrose Kam
Abstract: The fog node placement problem has become an important topic in the field of the Internet of Things (IoT). In this paper, we explore a specific scenario where all fog nodes (FNs) are connected to a base station through a minimum spanning tree, and we assume they all have the same radius of coverage. We also assume each FN has the capacity to process all the data coming from the end users (EUs) within its radius of coverage. We evaluate the effectiveness of different placement algorithms using information elasticity concepts.
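As a point of reference for the minimum-spanning-tree backhaul assumed in this setting, the toy sketch below connects hypothetical fog node coordinates to a base station by building an MST over pairwise distances. The coordinates and graph construction are illustrative only and do not reflect the paper's placement algorithms.

```python
import math
import networkx as nx

# Hypothetical base station (node 0) and fog node coordinates.
positions = {0: (0, 0), 1: (1, 2), 2: (3, 1), 3: (4, 4), 4: (2, 5)}

G = nx.Graph()
for u, (xu, yu) in positions.items():
    for v, (xv, yv) in positions.items():
        if u < v:
            G.add_edge(u, v, weight=math.dist((xu, yu), (xv, yv)))

mst = nx.minimum_spanning_tree(G)  # backhaul topology anchored at node 0
print(sorted(mst.edges(data="weight")))
```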
Paper
Can Cui, Seyhan Ucar, Yongkang Liu, Ahmadreza Moradipari, Akin Sisbot, Kentaro Oguchi
Abstract: Modern cars monitor their surroundings and record video to deter intruders, but current surveillance systems operate independently without communication. Connected vehicles can share detected features of suspicious individuals, improving tracking and alerting approaching drivers before they park in vulnerable spots. This paper explores this use case, where connected vehicles exchange features over the network upon detecting suspicious individuals. Real-world data analysis shows that connected vehicle surveillance improves detection accuracy by 38%.
Paper
Seyhan Ucar, Mohammad Irfan Khan, Onur Altintas
Abstract: Connected vehicles use Vehicle-to-Everything (V2X) communication to improve situational awareness. However, transmitting raw or irrelevant data may congest the communication channel, and limit the scalability and effectiveness of safety applications. This paper introduces the Context-Aware V2X concept, which assesses message relevance and adds contextual meaning before transmission, ensuring that only essential information reaches relevant receivers. We demonstrate the effectiveness of the Context-Aware V2X concept through simulations of the tailgating scenario, where results show that Context-Aware V2X can improve driver awareness and reduce risks associated with tailgating.
Paper
Xingqi Wu, Ritesh Honnalli, Junaid Farooq
Abstract: User equipments (UEs) sharing a RAN slice can generate bursty or unpredictable traffic that degrades the quality of service (QoS) for other slice members. We present triage slicing, a control mechanism that detects disruptive UEs using an online long short-term memory (LSTM) predictor and migrates them to a monitored slice with constrained, risk-proportional resources. Implemented on the Open AI Cellular (OAIC) testbed, triage slicing rapidly contains disruptive UEs, preserves benign QoS, and incurs minimal control overhead.
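The control loop can be pictured roughly as follows. This is a minimal sketch with a placeholder predictor standing in for the online LSTM; the slice names, threshold, and toy risk function are illustrative assumptions:

```python
def triage_step(ue_history, predict, threshold=0.8):
    """Return a slice assignment per UE based on predicted disruption risk.

    `ue_history` maps UE id -> recent demand samples, and `predict` is any
    one-step-ahead predictor (the demo uses an online LSTM); UEs whose
    predicted risk exceeds the threshold are migrated to a constrained,
    monitored slice.
    """
    assignment = {}
    for ue, samples in ue_history.items():
        risk = predict(samples)
        assignment[ue] = "monitored-slice" if risk > threshold else "default-slice"
    return assignment

# Toy usage: flag UEs whose recent demand trend is sharply increasing.
naive_predict = lambda xs: min(1.0, max(0.0, (xs[-1] - xs[0]) / (abs(xs[0]) + 1e-9)))
print(triage_step({"ue1": [1.0, 1.1, 1.2], "ue2": [1.0, 3.0, 9.0]}, naive_predict))
```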
Paper
Negar Erfaniantaghvayi, Zhongyuan Zhao, Kevin Chan, Ananthram Swami, Santiago Segarra
Abstract: This paper proposes an efficient approach to joint task offloading and routing for real-time sensor data analytics at the network edge, enabling applications such as video surveillance and environmental monitoring. This problem can be formulated as a mixed-integer program (MIP) with the objective of utility maximization subject to the constraints of network topology, limited link capacity, and diverse task profiles. To efficiently approximate this NP-hard problem, we propose SeLR, a combination of primal-dual optimization and reweighted $L_1$-norm regularization, which iteratively solves the convex relaxation while penalizing constraint violations and encouraging sparsity. Compared to greedy heuristics, SeLR provides a better accuracy–latency trade-off and better scalability to larger problems. Moreover, it reduces scheduling runtime by up to 9.17$\times$ over optimal solvers in networks with 300 nodes and 100 tasks.
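For context, the standard iterative reweighting step behind reweighted $L_1$ regularization is shown below in its generic textbook form; SeLR's exact penalty and update may differ.

```latex
% Generic reweighted L1 iteration (shown for context only):
% solve a weighted-L1 relaxation, then refresh the weights from the iterate.
\min_{x}\; f(x) + \lambda \sum_i w_i^{(t)} \, |x_i|,
\qquad
w_i^{(t+1)} = \frac{1}{\bigl|x_i^{(t)}\bigr| + \epsilon}.
```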
Paper
Qixuan Zai, Yaxuan Liu, Randall Berry
Abstract: This paper considers a setting where a band of spectrum is shared using time/frequency prioritized sharing, an approach in which one entity has priority access to a portion of the band for a portion of time, while other entities equally share the remaining time/frequency resources. We consider a setting in which all of the entities sharing the spectrum are wireless service providers (SPs) that are competing for a common pool of customers. Most prior work on competition between wireless SPs has adopted either a Bertrand competition model, where SPs compete by announcing prices, or a Cournot competition model where SPs compete by announcing quantities. Here, we compare these two frameworks for competition with time/frequency prioritized sharing. We perform both theoretical and numerical analysis on how market parameters affect the two economic models.
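For readers less familiar with the two competition models, the textbook distinction is sketched below with a linear inverse demand used purely for illustration; the paper's market model and parameters are not reproduced here.

```latex
% Cournot: SPs choose quantities q_i; price follows the (illustrative) linear inverse demand.
p = a - b \sum_i q_i, \qquad \pi_i^{\mathrm{Cournot}} = (p - c_i)\, q_i .
% Bertrand: SPs choose prices p_i; demand flows to the lowest-priced SP (split on ties).
\pi_i^{\mathrm{Bertrand}} = (p_i - c_i)\, D_i(p_1,\dots,p_n), \qquad
D_i = \begin{cases}
  D(p_i) & \text{if } p_i < p_j \ \forall j \neq i,\\
  D(p_i)/k & \text{if tied with } k-1 \text{ others},\\
  0 & \text{otherwise.}
\end{cases}
```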
13:30 - 13:45
Poster/Demo wrapup and Panel Setup
13:45 - 15:00
Panel: Emerging Verticals & Breakthrough Technologies in Wireless
Session Chair: Srinivas Shakkotai
Room: Moody Studio Theater
Bio: Jiasi Chen is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. She received her Ph.D. from Princeton University and her B.S. from Columbia University. Her research focus is on multimedia systems, mobile computing, and extended reality and its security. Her projects typically involve mathematical optimization coupled with systems implementation. She is a recipient of an NSF CAREER award and industry gifts from Meta, Adobe, and AT&T.
Bio: Dr. Seyhan Ucar is a Principal Researcher in the Advanced Development and Planning group at InfoTech Labs, Toyota Motor North America R&D. He received his B.Sc., M.Sc., and Ph.D. degrees in Computer Science and Engineering. His academic research focused on vehicular wireless communications, including Dedicated Short-Range Communications (DSRC), IEEE 802.11-based networking, and Visible Light Communication. Dr. Ucar is the inventor of more than 70 patents and the author of numerous conference and journal papers, including several demonstration papers. His current research interests lie in vehicular cloud and edge intelligence, Semantic V2X, and large-scale connected vehicle systems. At Toyota, he leads efforts on next-generation intelligent transportation systems, emphasizing distributed reasoning, network-efficient collaboration, and the integration of AI-driven wireless technologies for mobility applications.
Bio: Rajat Prakash is with the Wireless R&D group at Qualcomm. His current work focuses on construction and applications of digital twins of wireless networks. He also works on Open RAN and industrial IoT for 5G. Rajat has previously worked on a wide range of topics in wireless networks, including small cells, self-organizing networks, neutral hosting, VoLTE and VoWiFi. He has participated in standards and industry bodies such as 3GPP, O-RAN Alliance, CBRS Alliance, Multefire Alliance, Small Cell Forum, 5G ACIA, IEEE and 3GPP2. Rajat obtained his PhD from the University of Illinois at Urbana-Champaign, MS from Cornell University and B.Tech from the Indian Institute of Technology, Kanpur, all in Electrical Engineering.
Bio: Christopher G. Brinton is the Elmore Associate Professor of Electrical and Computer Engineering (ECE) at Purdue University. His research interest is at the intersection of networking, communications, and machine learning, specifically in fog/edge network intelligence, distributed machine learning, and AI/ML-inspired wireless network optimization. Dr. Brinton is a recipient of five of the US top early career awards, from the National Science Foundation (CAREER), Office of Naval Research (YIP), Defense Advanced Research Projects Agency (YFA and Director’s Fellowship), and Air Force Office of Scientific Research (YIP), the IEEE Communication Society William Bennett Prize Best Paper Award, the Intel Rising Star Faculty Award, the Qualcomm Faculty Award, and roughly $30M in sponsored research projects as a PI or co-PI. Prior to joining Purdue, Dr. Brinton was the Associate Director of the EDGE Lab and a Lecturer of Electrical Engineering at Princeton University. He also co-founded Zoomi Inc., a big data startup company that holds US Patents in machine learning for education. Dr. Brinton received the PhD (with honors) and MS Degrees from Princeton in 2016 and 2013, respectively, both in Electrical Engineering.
15:00 - 15:30
Coffee Break
(Commons)
15:30 - 17:00
Session 10: Trust & Privacy
Session Chair: Bin Li
Room: Hudspeth Auditorium
Paper
Heiko Kiesel, Jiska Classen (Hasso Plattner Institute, University of Potsdam)
Abstract: Internet browsing exposes private user information to websites and intermediate network providers, such as geographically linkable IP addresses and browsing history. Privacy proxies aim to address this internet privacy problem at scale. They are affordable for a large user base and do not compromise performance or usability—with a trend towards enabling them by default. Due to network encryption and closed-source code, their privacy promises cannot be verified using passive observations. We are the first to analyze the client-side implementations of three prominent privacy proxies available to over a billion users: Apple’s iCloud Private Relay, Google’s IP Protection, and Microsoft’s Edge Secure Network VPN. We develop a deep understanding of these systems by reverse-engineering closed-source components and creating a client to bring Apple’s, Google’s, and Microsoft’s solutions to Firefox on Linux. While privacy proxies offer privacy-enhancing features, we identify multiple issues deeply anchored in their architecture that weaken user privacy and security promises across all solutions. Our responsible disclosure and design recommendations aim to further strengthen this novel technology.
Paper
Jiaqi Shao (Duke Kunshan University); Tao Lin (Westlake University); Xiaojin Zhang (Huazhong University of Science and Technology); Qiang Yang (Hong Kong Polytechnic University); Bing Luo (Duke Kunshan University)
Abstract: Federated Unlearning (FU) enables the removal of specific clients' data influence from trained models. However, in non-IID settings, removing clients creates critical side effects: remaining clients with similar data distributions suffer disproportionate performance degradation, while the global model's stability deteriorates. These vulnerable clients then have reduced incentives to stay in the federation, potentially triggering a cascade of withdrawals that further destabilizes the system. To address this challenge, we develop a theoretical framework that quantifies how data heterogeneity impacts unlearning outcomes. Based on these insights, we model FU as a Stackelberg game where the server strategically offers payments to retain crucial clients based on their contribution to both unlearning effectiveness and system stability. Our rigorous equilibrium analysis reveals how data heterogeneity fundamentally shapes the trade-offs between system-wide objectives and client interests. Our approach improves global stability by up to 6.23%, reduces worst-case client degradation by 10.05%, and improves runtime efficiency by up to 38.6% over complete retraining.
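Schematically, a Stackelberg retention game of this kind can be written in a generic bi-level form, with the server leading via payments and clients responding with participation decisions. The utilities below are placeholders for illustration; the paper's exact objectives and constraints differ.

```latex
% Leader (server): choose payments r to maximize post-unlearning stability net of payouts.
\max_{r \ge 0}\;\; S\bigl(a^{*}(r)\bigr) \;-\; \sum_i r_i\, a_i^{*}(r)
% Followers (clients): each client stays (a_i = 1) only if its payment offsets
% its unlearning-induced loss \ell_i.
\text{s.t.}\quad a_i^{*}(r) \in \arg\max_{a_i \in \{0,1\}} \; a_i\,(r_i - \ell_i).
```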
Paper
Yueyang Quan (University of North Texas); Chang Wang, Shengjie Zhai (University of Nevada, Las Vegas); Minghong Fang (University of Louisville); Zhuqing Liu (University of North Texas)
Abstract: Decentralized min-max optimization allows multi-agent systems to collaboratively solve global min-max optimization problems by facilitating the exchange of model updates among neighboring agents, eliminating the need for a central server. However, sharing model updates in such systems carries a risk of exposing sensitive data to inference attacks, raising significant privacy concerns. To mitigate these privacy risks, differential privacy (DP) has become a widely adopted technique for safeguarding individual data. Despite its advantages, implementing DP in decentralized min-max optimization poses challenges, as the added noise can hinder convergence, particularly in non-convex scenarios with complex agent interactions. In this work, we propose DPMixSGD (Differential Private Minmax Hybrid Stochastic Gradient Descent), a novel privacy-preserving algorithm specifically designed for non-convex decentralized min-max optimization. Our method builds on the state-of-the-art STORM-based algorithm, one of the fastest decentralized min-max solutions. We rigorously prove that the noise added to local gradients does not significantly compromise convergence performance, and we provide theoretical bounds to ensure privacy guarantees. To validate our theoretical findings, we conduct extensive experiments across various tasks and models, demonstrating the effectiveness of our approach.
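A minimal sketch of the privacy ingredient: Gaussian-mechanism noise added to clipped local gradients before they are shared with neighbors. This is the generic DP-SGD-style step shown for intuition, with arbitrary clip and noise parameters, not the full STORM-based DPMixSGD update.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a local gradient to bound its sensitivity, then add Gaussian
    noise before it is exchanged with neighboring agents."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)

g = np.array([3.0, -4.0])      # raw local gradient (norm 5)
print(privatize_gradient(g))   # clipped to norm 1, then perturbed
```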
Paper
Sage Trudeau (University of Texas at Austin, MIT Lincoln Laboratory); Kaushik Chowdhury (University of Texas at Austin)
Abstract: The dynamic nature of wireless environments presents significant challenges for machine learning (ML) models in real-world radio frequency applications, where impairments such as noise, fading, and frequency shifts disrupt performance. To address these challenges and build trust in ML models, we present Augmented Input Resilience Analysis (AURA), a test framework designed for IQ-based RF models to rigorously assess ML model performance by simulating RF impairments and identifying critical vulnerabilities. AURA systematically applies test-time augmentations to provide a detailed examination of model strengths and weaknesses. Key contributions include (1) Score-CAM adapted to 1D IQ in time and a frequency-selective variant to localize spectral features, and (2) embedding similarity evaluation to quantify distribution shifts caused by impairments. By integrating these methods, AURA enhances interpretability, promoting trust in ML decision-making. We demonstrate AURA's utility in exposing critical vulnerabilities in well-cited models, such as over-reliance on power-based features, including instances where random noise is misclassified as a legitimate signal with 99.7% accuracy. AURA also evaluates remediation strategies, such as noise classes, which reduce misclassifications to less than 1% in the noise augmentation case. This framework aims to advance the design of trustworthy and resilient AI-driven systems for future RF ML technologies.
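To illustrate what test-time RF augmentations of IQ data look like in practice, the sketch below adds white Gaussian noise and a carrier frequency offset to a toy IQ vector. The impairment parameters are made up for illustration; AURA's actual augmentation suite and metrics are broader.

```python
import numpy as np

def add_awgn(iq, snr_db):
    """Add complex white Gaussian noise at a target SNR to an IQ vector."""
    sig_power = np.mean(np.abs(iq) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (
        np.random.randn(len(iq)) + 1j * np.random.randn(len(iq)))
    return iq + noise

def freq_shift(iq, offset_hz, sample_rate):
    """Apply a carrier frequency offset by mixing with a complex exponential."""
    n = np.arange(len(iq))
    return iq * np.exp(2j * np.pi * offset_hz * n / sample_rate)

iq = np.exp(2j * np.pi * 0.05 * np.arange(1024))            # toy tone
impaired = freq_shift(add_awgn(iq, snr_db=10), 1e3, 1e6)     # stressed test input
# `impaired` would then be fed to the model under test to probe its robustness.
```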
Thursday, Oct 30, 2025
8:00 - 17:00
Registration
(Anderson-Clarke Center)
Room: Anderson-Clarke Center Room 107
Room: Anderson-Clarke Center Room 115
Room: Anderson-Clarke Center Room 110
Room: Anderson-Clarke Center Room 108