Accepted Papers

Abstract: This paper analyzes the performance of distributed Medium Access Control (MAC) protocols in ultra-dense multi-channel wireless networks, where N frequency bands (or channels) are shared by M = mN devices, and devices make decisions to probe and then transmit over available frequency bands. While such a system can be formulated as an M-player Bayesian game, it is often infeasible to compute the Nash equilibria of a large-scale system due to the curse of dimensionality. In this paper, we exploit the Mean Field Game (MFG) approach and analyze the system in the large population regime (N tends to ∞ and m is a constant). We consider a distributed and low-complexity MAC protocol where each device probes d/k channels by following an exponential clock which ticks with rate k when it has a message to transmit, and optimizes the probing strategy to balance throughput and probing cost. We present a comprehensive analysis from the MFG perspective, including the existence and uniqueness of the Mean Field Nash Equilibrium (MFNE), convergence to the MFNE, and the price of anarchy with respect to the global optimal solution. Our analysis shows that the price of anarchy is at most one half, but is close to zero when the traffic load or the probing cost is low. Our numerical results confirm our analysis and show that the MFNE is a good approximation of the M-player system. Besides showing the efficiency of the considered MAC for emerging applications in ultra-dense multi-channel wireless networks, this paper demonstrates the novelty of MFG analysis, which can be used to study other distributed MAC protocols in ultra-dense wireless networks.
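
As a toy illustration of the throughput/probing trade-off described above, the sketch below simulates one round of random channel probing (assumptions mine: uniform channel choice, a single probing pass per device, and a fixed traffic load; the paper's protocol instead uses exponential probing clocks and game-theoretically optimized strategies):

```python
import random

def simulate_probing(N=100, m=2, d=3, load=0.5, seed=1):
    """Toy single-round simulation of distributed channel probing.

    N channels are shared by M = m*N devices; each device that has a
    message (with probability `load`) probes `d` channels chosen
    uniformly at random and grabs the first one nobody else has
    claimed. Returns the fraction of active devices that succeed.
    Illustrative only -- not the paper's mean-field model.
    """
    random.seed(seed)
    M = m * N
    active = [i for i in range(M) if random.random() < load]
    claimed = set()
    success = 0
    for dev in active:
        for ch in random.sample(range(N), d):
            if ch not in claimed:
                claimed.add(ch)       # device grabs this free channel
                success += 1
                break
    return success / len(active) if active else 1.0

share = simulate_probing()
```

Raising `d` spends more probing effort per device; the paper's analysis is about choosing that effort optimally in the large-N limit.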

How Bad is Selfish Caching?

Qian Ma and Edmund Yeh (Northeastern University); Jianwei Huang (The Chinese University of Hong Kong)

Abstract: Caching networks can reduce the routing costs of accessing contents by caching contents closer to users. However, cache nodes may belong to different entities and behave selfishly to maximize their own benefits, which often leads to performance degradation for the overall network. While there has been extensive literature on allocating contents to caches to maximize the social welfare, the analysis of selfish caching behaviors remains largely unexplored. In this paper, we model the selfish behaviors of cache nodes as selfish caching games on arbitrary directed graphs with heterogeneous content popularity. We study the existence of a pure strategy Nash equilibrium (PSNE) in selfish caching games, and analyze its efficiency in terms of social welfare. We show that a PSNE does not always exist in arbitrary-topology caching networks. However, if the network does not have a mixed request loop, i.e., a directed loop in which each edge is traversed by at least one content request, we show that a PSNE always exists and can be found in polynomial time. We then show that the efficiency of Nash equilibria, captured by the price of anarchy (PoA), can be arbitrarily poor if we allow arbitrary content request patterns. However, when cache nodes have homogeneous request patterns, we show that the PoA is bounded even when allowing arbitrary topologies. We further analyze selfish caching games for cache nodes with limited computational capabilities, and show that an approximate PSNE exists with bounded PoA in certain cases of interest. Simulation results show that increasing the cache capacity in the network improves the efficiency of Nash equilibria, while adding extra cache nodes can degrade the efficiency of Nash equilibria.
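
A minimal sketch of best-response dynamics in a selfish caching game can make the PSNE search concrete. This is a toy single-content, uncapacitated variant with my own cost convention, not the paper's model (which has heterogeneous popularity and mixed request loops):

```python
def best_response_dynamics(dist, cache_cost, max_rounds=100):
    """Toy best-response dynamics for a selfish caching game.

    dist[i][j] is node i's routing cost to fetch from node j. Each
    node either caches the (single) content, paying cache_cost, or
    fetches it from the cheapest node currently caching it. Nodes
    update their choice in turn; if no node wants to deviate, the
    profile is a pure Nash equilibrium and is returned, else None.
    """
    n = len(dist)
    caches = [True] * n                      # start with everyone caching
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            others = [dist[i][j] for j in range(n) if j != i and caches[j]]
            fetch = min(others) if others else float("inf")
            want = cache_cost < fetch        # cache only if fetching is dearer
            if want != caches[i]:
                caches[i] = want
                changed = True
        if not changed:
            return caches                    # fixed point = PSNE
    return None
```

On a 3-node line with unit edge costs and cache cost 1.5, the dynamics settle with the two endpoints caching and the middle node free-riding.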

APP: Augmented Proactive Perception for Driving Hazards with Sparse GPS Trace

Siqian Yang and Cheng Wang (Tongji University); Hongzi Zhu (Shanghai Jiao Tong University); Changjun Jiang (Tongji University)

Abstract: Driving safety is a persistent concern for urban dwellers who spend hours driving on the road in ordinary daily life. Traditional driving hazard detection solutions heavily rely on on-board sensors (e.g., front and rear radars, cameras) with limited sensing range. In this article, we propose a proactive hazard warning system, called APP, which aims to alert drivers when there are vehicles with dangerous behaviors nearby. To this end, APP first incorporates several basic techniques (e.g., tensor decomposition, similarity comparison) to estimate fine-grained behavioral data of a driver based on sparsely sampled GPS traces. Then, with the estimated unlabelled fine-grained data, potentially dangerous behaviors of a particular vehicle are identified and recognized with a Gaussian Mixture Model (GMM) based approach. We have implemented and evaluated our system with a dataset collected over 30 days from over 13,676 taxicabs. Our method achieves an average accuracy of 81% in recognizing potentially dangerous behaviors.

DeMiLTE: Detecting and Mitigating LTE Interference for Enterprise Wi-Fi in 5 GHz

Swetank Kumar Saha (University at Buffalo, The State University of New York); Christina Vlachou (Hewlett Packard Labs); Dimitrios Koutsonikolas (University at Buffalo, The State University of New York); Kyu-Han Kim (Hewlett Packard Labs)

Abstract: LTE in unlicensed 5 GHz bands is being deployed by mobile operators for increased capacity. In this paper, we conduct an extensive measurement study with commodity LTE and Wi-Fi hardware to identify key coexistence challenges. Our study – the first to include a commercial LAA base station – confirms that LTE interference causes Wi-Fi performance to degrade, harming 802.11ac high-throughput features. We then present DeMiLTE – a system for commodity enterprise Wi-Fi APs that detects, quantifies, and reacts to LTE interference. To the best of our knowledge, our solution is the first that achieves fair coexistence without modifying the LTE PHY/MAC, while still being fully compliant with the 802.11ac standard. DeMiLTE’s architecture is based on lightweight per-link interference detection and enables Wi-Fi APs to mitigate LTE-induced performance degradation with minimal overhead. Our evaluation results show that DeMiLTE can provide up to 110% throughput gains and alleviate client disruption caused by LTE interference.

Minimizing Age-of-Information with Throughput Requirements in Multi-Path Network Communication

Qingyu Liu and Haibo Zeng (Dept. of ECE, Virginia Tech); Minghua Chen (Dept. of IE, The Chinese University of Hong Kong)

Abstract: We consider the scenario where a sender periodically sends a batch of data to a receiver over a multi-hop network, possibly using multiple paths. Our objective is to minimize peak/average Age-of-Information (AoI) subject to throughput requirements. The consideration of batch generation and multi-path communication differentiates us from existing studies. We first show that our AoI minimization problems are NP-hard, but only in the weak sense as we develop an optimal algorithm with a pseudo-polynomial time complexity. Next we prove that minimizing AoI and minimizing maximum delay for a batch of data are "roughly" equivalent, in the sense that any optimal solution of minimizing maximum delay is an approximate solution of minimizing AoI with bounded optimality loss. We leverage this understanding and design a general approximation framework for our AoI minimization problems. It can adopt any polynomial-time α-approximation algorithm of the existing maximum delay minimization problem to derive a polynomial-time algorithm for our AoI minimization problems with an approximation ratio of α + c, where c is a constant determined by throughput requirements. The framework suggests a new avenue for designing approximation algorithms for minimizing AoI in multi-path communications. Extensive simulations over various network topologies validate the effectiveness of our approaches.
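
The "roughly equivalent" claim can be illustrated with a small sketch (assumptions mine: a saw-tooth age evolution and in-order delivery with per-batch delay below the period): the per-cycle peak age is the generation period plus that batch's maximum delay, so minimizing the worst peak age amounts to minimizing the worst delay.

```python
def peak_ages(period, delays):
    """Per-cycle peak age-of-information for periodic batch updates.

    Batch k is generated at time k*period and fully delivered
    delays[k] later. Just before batch k completes, the receiver
    still holds batch k-1, so the age peaks at period + delays[k].
    Toy sketch; assumes delays[k] < period so batches arrive in order.
    """
    return [period + d for d in delays]

# Worst peak age = period + worst batch delay: the two minimizations coincide.
worst = max(peak_ages(10, [2, 3, 1]))
```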

DeepRadioID: Real-Time Channel-Resilient Optimization of Deep Learning-based Radio Fingerprinting Algorithms

Francesco Restuccia, Salvatore D'Oro, Amani Al-Shawabka, Mauro Belgiovine, Luca Angioloni, Stratis Ioannidis, Kaushik Chowdhury, and Tommaso Melodia (Northeastern University)

Abstract: A core security challenge for the Internet of Things is to devise reliable and energy-efficient authentication techniques. Radio fingerprinting addresses this problem by leveraging the unique hardware-level imperfections imposed on the received wireless signal by the transmitter's radio circuitry, thus avoiding energy-expensive cryptography at the transmitter's side. Most existing approaches utilize hand-tailored feature extraction techniques, which can identify devices operating under a specific wireless standard only and are not amenable to fingerprinting a large population of devices. Conversely, by mapping inputs onto a very large feature space, deep learning algorithms can be trained to fingerprint large populations of wireless devices operating under any wireless standard. One of the most crucial challenges in radio fingerprinting is to counteract the action of the wireless channel, which decreases fingerprinting accuracy significantly by disrupting hardware impairments. On the other hand, due to their sheer size, deep learning algorithms are hardly re-trainable in real time. Another aspect that is yet to be investigated is whether an adversary can successfully impersonate another device's fingerprint. To address these key issues, this paper proposes DeepRadioID, a system to optimize the accuracy of deep-learning-based radio fingerprinting algorithms without retraining the underlying deep learning model. The key intuition is that through the application of a carefully optimized digital finite impulse response (FIR) filter at the transmitter's side, we can apply tiny modifications to the waveform to strengthen its fingerprint according to the current channel conditions. We mathematically formulate the Waveform Optimization Problem (WOP) as the problem of finding, for a given trained neural network, the optimum FIR filter to be used by the transmitter to improve its current fingerprinting accuracy. Moreover, since the FIR filter is tailored to the specific device's hardware imperfections, we show that an adversary will not be able to use it to imitate a legitimate device's fingerprint. We extensively evaluate DeepRadioID on an experimental testbed of 20 nominally identical software-defined radios, as well as on two datasets of 500 ADS-B devices and 500 WiFi devices provided by the DARPA RFMLS program. Experimental results show that DeepRadioID (i) increases fingerprinting accuracy by about 35%, 50%, and 58% in the three scenarios considered; (ii) decreases an adversary's accuracy by about 54% when trying to imitate other devices' fingerprints using their filters; (iii) achieves a 27% improvement over the state of the art on a 100-device dataset.
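
The FIR-shaping idea can be sketched in a few lines. This is illustrative only: the taps below are made-up numbers, whereas DeepRadioID would obtain them by solving the WOP against the trained classifier for the current channel.

```python
def apply_fir(iq, taps):
    """Causal FIR filtering of complex baseband samples:
    out[n] = sum_k taps[k] * iq[n - k], trimmed to len(iq) samples."""
    out = []
    for n in range(len(iq)):
        acc = 0j
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * iq[n - k]
        out.append(acc)
    return out

samples = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]   # toy IQ waveform
taps = [1 + 0j, 0.05 - 0.02j]                 # tiny made-up perturbation
shaped = apply_fir(samples, taps)
```

With identity taps `[1 + 0j]` the waveform passes through unchanged; small extra taps perturb it slightly, which is the mechanism the paper exploits to reinforce a device's fingerprint.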

Jam Sessions: Analysis and Experimental Evaluation of Advanced Jamming Attacks in MIMO Networks

Liyang Zhang, Francesco Restuccia, and Tommaso Melodia (Northeastern University); Scott M. Pudlewski (Air Force Research Laboratory)

Abstract: Research advances in wireless security have shown that advanced jamming can significantly decrease the performance of wireless communications. In advanced jamming, the adversary intentionally concentrates the available energy budget on specific critical components (e.g., pilot symbols, acknowledgement packets, etc.) to (i) increase the jamming effectiveness, as more targets can be jammed with the same energy budget; and (ii) decrease the likelihood of being detected, as the channel is jammed for a shorter period of time. These key aspects make advanced jamming very stealthy yet exceptionally effective in practical scenarios. One of the fundamental challenges in designing defense mechanisms against an advanced jammer is understanding which critical component in a wireless transmission is the bottleneck. Or, equivalently, with the same amount of energy, which jamming target yields the lowest throughput, for an arbitrarily given channel condition. To the best of our knowledge, this problem remains unsolved, as the existing literature still lacks an analytical model to quantitatively compare advanced jamming schemes targeting different components of a wireless transmission. As a first step toward filling this gap, in this paper we conduct a comparative analysis of several of the most viable advanced jamming schemes in widely used MIMO networks. We first mathematically model a number of advanced jamming schemes at the signal processing level, so that a quantitative relationship between the jamming energy and the jamming effect is established. Based on the model, theorems are derived on the optimal advanced jamming scheme for an arbitrary channel condition. The theoretical findings are validated through extensive simulations and experiments on a 5-radio 2x2 MIMO testbed. Our results show that the theorems are able to accurately predict jamming efficiency. Moreover, to further demonstrate that the theoretical findings are applicable to crucial real-world jamming problems, we show that the theorems can be incorporated into state-of-the-art reinforcement-learning-based jamming algorithms to boost the action exploration phase so that faster convergence is achieved.

Coping Uncertainty in Coexistence via Exploitation of Interference Threshold Violation

Shaoran Li, Yan Huang, and Chengzhang Li (Virginia Tech); Brian A. Jalaian (U.S. Army Research Laboratory); Y. Thomas Hou and Wenjing Lou (Virginia Tech)

Abstract: In underlay coexistence, secondary users (SUs) attempt to keep their interference to the primary users (PUs) under a threshold. Due to the absence of cooperation from the PUs, there exists much uncertainty at the SUs in terms of channel state information (CSI). An effective approach to cope with such uncertainty is to allow occasional interference threshold violations by the SUs, as long as such violations can be tolerated by the PUs. This paper exploits this idea through a chance constrained programming (CCP) formulation, where the knowledge of CSI is limited to only the first and second order statistics rather than its complete distribution information. Our main contribution is the introduction of a novel and powerful technique, namely Exact Conic Reformulation (ECR), to reformulate the intractable chance constraints. ECR guarantees an equivalent reformulation of linear chance constraints into deterministic conic constraints and does not have the limitations associated with the state-of-the-art approach -- Bernstein Approximation. Simulation results confirm that ECR offers significant performance improvement over Bernstein Approximation in uncorrelated channels and a competitive solution in correlated channels (where Bernstein Approximation is no longer applicable). As a result, ECR represents a new tool to fully exploit the benefits of CCP when addressing uncertainty problems in spectrum sharing.
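
For intuition on how a chance constraint becomes a deterministic conic constraint under first- and second-order moment information, here is the classical Cantelli-type reformulation in the scalar case. This is only the textbook construction shown for illustration; it is not the paper's ECR technique:

```python
import math

def conic_ok(mu, var, x, b, eps):
    """Check the classical moment-based reformulation of the scalar
    chance constraint  Pr(a*x <= b) >= 1 - eps  when only the mean
    `mu` and variance `var` of the random coefficient `a` are known:

        mu*x + sqrt((1 - eps)/eps) * sqrt(var) * |x|  <=  b

    This deterministic inequality guarantees the chance constraint
    for every distribution with those moments (Cantelli's bound),
    and is a second-order-cone constraint in x.
    """
    kappa = math.sqrt((1 - eps) / eps)
    return mu * x + kappa * math.sqrt(var) * abs(x) <= b
```

With `eps = 0.1` the safety multiplier is `kappa = 3`, so even a small variance noticeably tightens the feasible region.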

Quiet CTS: CTS Power Control for Better Spatial Reuse in Wi-Fi

Seongwon Kim (SK Telecom); Youngwook Son, Kanghyun Lee, Jaehong Yi, and Sunghyun Choi (Seoul National University, Korea)

Abstract: This paper sheds light on the necessity of transmit power control for 802.11 clear-to-send (CTS) frames for better spatial reuse in densely deployed wireless local area networks (WLANs). We first study the effect of the interference caused by CTS frames through both experiments and numerical analysis. Based on the analysis, we identify a limited spatial reuse problem caused by the RTS/CTS mechanism and establish a guideline for controlling CTS transmit power to solve the problem. Next, a standard-compliant CTS transmit power control method, namely Quiet CTS (QCTS), is proposed. Through extensive simulations and prototyping using both software-defined radio and commercial devices, we demonstrate that QCTS enhances the average network throughput by up to 50% by enabling more simultaneous data transmissions.

Forever Young: Aging Control for Smartphones in Hybrid Networks

Eitan Altman (INRIA); Rachid El-Azouzi (University of Avignon); Daniel Sadoc Menasche (UFRJ); Yuedong Xu (Fudan University)

Abstract: The demand for Internet services that require frequent updates through small messages, also known as microblogging, has grown tremendously in the past few years. Although the use of such applications by domestic users is usually free, their access from mobile devices is subject to fees and consumes energy from limited batteries. If a user activates his mobile device and is in the range of a publisher, an update is received at the expense of monetary and energy costs. Thus, users face a tradeoff between such costs and the aging of their messages. The goal of this paper is to show how to cope with such a tradeoff by devising aging control policies. An aging control policy consists of deciding, based on the utility of the owned content, whether to activate the mobile device, and if so, which technology to use (WiFi or cellular). We present a model that yields the optimal aging control policy. Our model is based on a Markov Decision Process (MDP) in which states correspond to content ages. Using our model, we show the existence of an optimal strategy in the class of threshold strategies, wherein users activate their mobile devices if the age of their messages surpasses a given threshold and remain inactive otherwise. The accuracy of our model is validated against traces from the UMass DieselNet bus network.
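
The threshold structure can be reproduced on a toy aging-control MDP. All modeling choices here are mine (linear age cost, a fixed activation cost, a single access technology, and a capped age); the paper's MDP is richer:

```python
def aging_policy(max_age=10, activation_cost=3.0, iters=500, gamma=0.9):
    """Value iteration for a toy aging-control MDP.

    State = content age a in 1..max_age. Each step incurs holding
    cost a. Waiting lets the age grow (capped at max_age); activating
    pays activation_cost and resets the age to 1. Returns, for each
    age, whether activating is optimal -- for this cost structure the
    answer is a threshold policy.
    """
    V = [0.0] * (max_age + 1)                 # V[a] for ages 1..max_age
    for _ in range(iters):
        newV = V[:]
        for a in range(1, max_age + 1):
            nxt = min(a + 1, max_age)
            wait = a + gamma * V[nxt]                    # let content age
            act = a + activation_cost + gamma * V[1]     # refresh content
            newV[a] = min(wait, act)
        V = newV
    return [a + activation_cost + gamma * V[1] <= a + gamma * V[min(a + 1, max_age)]
            for a in range(1, max_age + 1)]

policy = aging_policy()
```

For these numbers the computed policy waits at small ages and activates once the age crosses a threshold, matching the structural result in the abstract.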

Abstract: The effectiveness of many well-known network control algorithms (e.g., MaxWeight) relies on the premise that all of the nodes are fully controllable. However, these algorithms may yield poor performance in a partially-controllable network where a subset of nodes are uncontrollable and may take arbitrary (possibly adversarial) actions. Such a partially-controllable model is of increasing importance in real-world networked systems such as overlay-underlay networks and uncooperative wireless networks. In this paper, we study two fundamental network optimization problems in a network with adversarial uncontrollable nodes. First, we investigate the network stability problem where the objective is to stabilize the network whenever possible. We develop a lower bound on the total queue length that can be achieved by any causal policy, and propose a throughput-optimal algorithm, called Tracking-MaxWeight (TMW), which enhances the original MaxWeight algorithm with an explicit learning of the behavior of uncontrollable nodes. Second, we study the network utility maximization problem where the objective is to maximize cumulative utility subject to stability constraints. We provide a lower bound on the utility-delay tradeoff, and develop the Tracking Drift-plus-Penalty (TDP) algorithm that achieves tunable utility-delay tradeoffs.
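
For reference, the classic MaxWeight rule that Tracking-MaxWeight builds on can be stated in one line. This sketch uses my own toy encoding of feasible schedules; TMW's explicit learning of the uncontrollable nodes' behavior is not shown:

```python
def maxweight_schedule(queues, rate_vectors):
    """Classic MaxWeight: among the feasible service-rate vectors,
    pick the one maximizing the sum of queue length x service rate.

    queues: current backlog per link.
    rate_vectors: list of feasible rate vectors (one entry per link).
    """
    return max(rate_vectors,
               key=lambda r: sum(q * x for q, x in zip(queues, r)))
```

With backlogs [5, 1] and two mutually exclusive links, MaxWeight serves the longer queue; TMW keeps this weighting but tracks what the adversarial nodes actually do.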

DECO: Joint Computation, Forwarding and Data Placement in Data-Driven Computing Networks

Khashayar Kamran, Edmund Yeh, and Qian Ma (Northeastern University)

Abstract: The emergence of IoT devices and the predicted increase in the number of data-driven and delay-sensitive applications highlight the importance of dispersed computing platforms (e.g. edge computing and fog computing) that can intelligently manage in-network computation and data placement. In this paper, we propose the DECO (Data-drivEn COmputation) framework for joint computation, caching, and request forwarding in data-driven computing networks. DECO utilizes a virtual control plane which operates on the demand rates for computation and data, and an actual plane which handles computation requests, data requests, data objects and computation results in the physical network. We present a throughput optimal policy within the virtual plane, and use it as a basis for adaptive and distributed computation, caching, and request forwarding in the actual plane. We demonstrate the superior performance of the DECO policy in terms of request satisfaction delay as compared with several baseline policies, through extensive numerical simulations over multiple network topologies.

Age-optimal Sampling and Transmission Scheduling in Multi-Source Systems

Ahmed M. Bedewy (The Ohio State University); Yin Sun (Auburn University); Sastry Kompella (Naval Research Laboratory); Ness B. Shroff (The Ohio State University)

Abstract: In this paper, we consider the problem of minimizing the age of information in a multi-source system, where sources communicate their update packets to a destination via a channel with random delay. Due to interference, only one source can be scheduled at a time. We consider the problem of finding a decision policy that controls the packet sampling times and schedules source transmissions to minimize the total average peak age (TaPA) and the total average age (TaA) of the sources. Our investigation of this problem results in an important separation principle: The optimal scheduling strategy and the optimal sampling strategy are independent of each other. In particular, we prove that, given the sampling times of the update packets, the Maximum Age First (MAF) scheduling strategy provides the best age performance among all scheduling strategies. This transforms our overall optimization problem into an optimal sampling problem, given that the decision policy follows the MAF scheduling strategy. Interestingly, we show that the zero-wait sampler (in which a packet is generated once the channel is idle) is optimal for minimizing the TaPA, while it does not always minimize the TaA. We use Dynamic Programming (DP) to investigate the optimal sampler for minimizing the TaA. Finally, we provide an approximate analysis of Bellman's equation to approximate the TaA-optimal sampler by a water-filling solution and demonstrate its efficacy through numerical evaluations.
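
The Maximum-Age-First rule itself is simple to state. The one-liner below (with my own tie-breaking to the lowest index) picks which source to schedule given the current ages at the destination; per the separation principle above, this choice is optimal regardless of how the sampling times are chosen:

```python
def maf_schedule(ages):
    """Maximum-Age-First: serve the source whose information at the
    destination is currently the oldest; ties go to the lowest index."""
    return max(range(len(ages)), key=lambda i: (ages[i], -i))
```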

Economics of Age of Information Management under Network Externalities

Shugang Hao and Lingjie Duan (Singapore University of Technology and Design)

Abstract: Online content platforms are concerned about the freshness of their content updates to their end customers, and increasingly more platforms now invite and pay the crowd to share real-time information (e.g., news and sensor data) to help reduce their ages of information (AoI). How much crowdsensed data to sample and buy over time is a critical question for a platform's AoI management, requiring a good balance between its AoI and the incurred sampling cost. This question becomes more interesting when considering the stage after sampling, where two platforms coexist in sharing a content delivery network of limited bandwidth, and one platform's update may jam or preempt the other's under negative network externalities. When the two selfish platforms know each other's sampling costs, we formulate their interaction as a non-cooperative game and show that both want to over-sample to reduce their own AoI, causing the price of anarchy (PoA) to be infinity. To remedy this huge efficiency loss, we propose a non-monetary trigger mechanism of punishment in a repeated game to enforce the platforms' cooperation to achieve the social optimum. We also study the more challenging incomplete information scenario, in which platform 1 knows more information about sampling costs than platform 2 by hiding its own sampling cost information in the Bayesian game. Perhaps surprisingly, we show that platform 1 may even get hurt by knowing more information. We successfully redesign the trigger-and-punishment mechanism to negate platform 1's information advantage and ensure no cheating. As compared to the social optimum, we prove that the new mechanism can achieve an approximation ratio of 2, as long as the two platforms' average sampling costs are the same and the discount factor is large.

A Probabilistic Approach for Demand-Aware Ride-Sharing Optimization

Qiulin Lin, Wenjie Xu, and Minghua Chen (The Chinese University of Hong Kong); Xiaojun Lin (Purdue University)

Abstract: Ride-sharing is a modern urban-mobility paradigm with tremendous potential in reducing congestion and pollution. Demand-aware design is a promising avenue for addressing a critical challenge in ride-sharing systems, namely the joint optimization of request-vehicle assignment and routing for a fleet of vehicles. In this paper, we develop a probabilistic demand-aware framework to tackle this challenge. We focus on maximizing the expected number of passengers picked up by the fleet, given a probability distribution of future demand. A salient feature of our framework is to assign requests to vehicles in a probabilistic manner. It differentiates our work from existing ones and allows us to explore a richer design space to tackle the request-to-vehicle assignment puzzle with a performance guarantee while still keeping the final solution practically implementable. The optimization problem is non-convex, combinatorial, and NP-hard in nature. As a key contribution, we exploit the problem structure and propose an elegant approximation of the objective function to develop a dual-subgradient heuristic. We characterize a condition under which the heuristic generates a (1−1/e)-approximation solution. Our solution is simple, scalable, and amenable to practical implementation. We carry out numerical experiments based on real-world travel request traces in Manhattan. The results show that as compared to a conventional demand-oblivious scheme, our demand-aware solution improves the total number of passenger pickups by 37%. We also show that joint optimization at the fleet level increases the number of pickups by 22%, as compared to each vehicle carrying out optimization separately.

DeepFusion: A Deep Learning Framework for the Fusion of Heterogeneous Sensory Data

Hongfei Xue, Wenjun Jiang, and Chenglin Miao (SUNY Buffalo); Ye Yuan (Beijing University of Technology); Fenglong Ma, Xin Ma, and Yijiang Wang (SUNY Buffalo); Shuochao Yao (UIUC); Wenyao Xu, Aidong Zhang, and Lu Su (SUNY Buffalo)

Abstract: In recent years, significant research efforts have been devoted to building intelligent and user-friendly IoT systems that enable a new generation of applications capable of performing complex sensing and recognition tasks. In many such applications, there are usually multiple different sensors monitoring the same object. Each of these sensors can be regarded as an information source and provides a unique "view" of the observed object. Intuitively, if we can combine the complementary information carried by multiple sensors, we will be able to improve the sensing performance. Towards this end, we propose DeepFusion, a unified multi-sensor deep learning framework, to learn informative representations of heterogeneous sensory data. DeepFusion can combine different sensors' information weighted by the quality of their data and incorporate cross-sensor correlations, and thus can benefit a wide spectrum of IoT applications. To evaluate the proposed DeepFusion model, we set up two real-world human activity recognition testbeds using commercial wearable and wireless sensing devices. Experimental results show that DeepFusion can effectively recognize activities and outperforms state-of-the-art human activity recognition methods.
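
The quality-weighting idea can be sketched as a plain weighted average of per-sensor feature vectors. This is illustrative only: DeepFusion learns both the weights and the cross-sensor correlations inside a neural network rather than using fixed weights:

```python
def fuse(features, qualities):
    """Quality-weighted combination of per-sensor feature vectors.

    features: one feature vector per sensor (all the same length).
    qualities: one non-negative quality score per sensor; higher-quality
    sensors contribute more to the fused representation.
    """
    total = sum(qualities)
    dim = len(features[0])
    return [sum(q * f[d] for f, q in zip(features, qualities)) / total
            for d in range(dim)]
```

With equal qualities this reduces to a plain average; skewing the qualities pulls the fused vector toward the more reliable sensor.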

Robot Navigation in Radio Beam Space: Leveraging Robotic Intelligence for Seamless mmWave Network Coverage

Anfu Zhou and Shaoqing Xu (Beijing University of Posts and Telecommunications); Song Wang and Jingqi Huang (University of California San Diego); Shaoyuan Yang (Beijing University of Posts and Telecommunications); Teng Wei and Xinyu Zhang (University of California San Diego); Huadong Ma (Beijing University of Posts and Telecommunications)

Abstract: The emerging millimeter-wave (mmWave) networking technology promises to unleash a new wave of multi-Gbps wireless applications. However, due to the high directionality of mmWave radios, maintaining a stable link connection remains an open problem. A user's slight orientation change, coupled with motion and blockage, can easily disconnect the link. In this paper, we propose miDroid, a robotic mmWave relay that optimizes network coverage through wireless sensing and autonomous motion/rotation planning. The robot relay automatically constructs the geometry/reflectivity of the environment by estimating the geometries of all signal paths. It then navigates itself along an optimal moving trajectory, and ensures continuous connectivity for the client despite environment/human dynamics. We have prototyped miDroid on a programmable robot carrying a commodity 60 GHz radio. Our field trials demonstrate that miDroid achieves nearly full coverage in dynamic environments, even with constrained speed and mobility region.

Tile-based Caching Optimization for 360-degree Videos

Georgios Papaioannou and Iordanis Koutsopoulos (Athens University of Economics and Business)

Abstract: Panoramic or 360-degree video streaming and playback offer an immersive viewing experience and are becoming increasingly popular, albeit very resource-demanding. Caching 360-degree video content close to the user reduces content delivery delay and bandwidth consumption. Contrary to monolithic single-stream transmission, the recently proposed tiling approach allows streaming different parts (tiles) of the 360-degree content at different resolutions. We study the problem of optimal caching of 360-degree video streams, where the problem is to decide which tiles and tile resolutions to cache. The difference from conventional video is that each tile may need to be cached at multiple resolutions, since it may appear in different positions in different viewports, and the proportions of these positions are dictated by viewing statistics collected through, e.g., Head-Mounted Display units. This observation leads to a novel caching objective: the cached resolutions for each tile should be as close as possible to the ones required when the tile is viewed by the user. We study the cases of caching tile streams either at different resolutions or in a layered encoding fashion. We seek to optimize an objective that combines (i) an error metric between requested and cached tile resolutions across different viewports, and (ii) coverage of the tile set. For the case of multiple resolutions, we show that the problem of selecting tile resolutions to cache so as to minimize the error metric above is equivalent to the K-Medoids problem. For layered encoding, the problem is to find the maximum-resolution layer to cache for each tile stream, and we show that this is equivalent to a Multiple-Choice Knapsack problem. We present an implementation of the cache optimization scheme, evaluate our model on a tiled 360-degree video distribution simulator, and demonstrate a significant increase in cache hit ratio over state-of-the-art caching strategies.
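
To see the K-Medoids connection for the multiple-resolutions case, here is a brute-force one-dimensional instance (assumptions mine: a single tile, error = absolute resolution mismatch, and candidate medoids restricted to the requested resolutions):

```python
from itertools import combinations

def best_cached_resolutions(requested, k):
    """Pick k resolutions to cache for one tile so that the total
    mismatch to the requested resolutions (one entry per viewport
    occurrence) is minimized -- a brute-force 1-D K-Medoids.

    Error of one request = distance to the nearest cached resolution.
    Assumes k <= number of distinct requested resolutions.
    """
    best, best_err = None, float("inf")
    for cand in combinations(sorted(set(requested)), k):
        err = sum(min(abs(r - c) for c in cand) for r in requested)
        if err < best_err:
            best, best_err = list(cand), err
    return best, best_err
```

The paper's equivalence means efficient K-Medoids machinery can replace this exhaustive search at scale.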

DIRECT: Distributed Cross-Domain Resource Orchestration in Cellular Edge Computing

Qiang Liu and Tao Han (The University of North Carolina at Charlotte)

Abstract: Network slicing and edge computing are key technologies for enabling compute-intensive applications from vertical industries in 5G. We define cellular networks with edge computing capabilities as cellular edge computing. In this paper, we study cross-domain resource orchestration for dynamic network slicing in cellular edge computing. The fundamental research challenge stems from the difficulty of modeling the relationship between slice performance and resources from multiple technical domains across a network with many base stations and distributed edge servers. To address this challenge, we develop a distributed cross-domain resource orchestration (DIRECT) protocol that optimizes cross-domain resource orchestration while providing performance and functional isolation among network slices. The main component of DIRECT is a distributed cross-domain resource orchestration algorithm designed by integrating the ADMM method with machine-learning-based optimization. This algorithm efficiently orchestrates multi-domain resources without requiring a performance model of the network slices. We develop and implement the DIRECT protocol in a small-scale cellular edge computing prototype based on the OpenAirInterface LTE and CUDA GPU computing platforms. The performance of DIRECT is validated through both experiments and network simulations.

Transient Dynamics of Epidemic Spreading and Its Mitigation on Large Networks

Chul-Ho Lee (Florida Institute of Technology); Srinivas Tenneti (Cisco Systems and North Carolina State University); Do Young Eun (North Carolina State University)

Abstract: In this paper, we aim to understand the transient dynamics of a susceptible-infected (SI) epidemic spreading process on a large network. The SI model has been largely overlooked in the literature, even though it is naturally a better fit for modeling the malware propagation in early times when patches/vaccines are not available, or over a wider range of timescales when massive patching is practically infeasible. Nonetheless, its analysis is far from trivial, as its important dynamics are all transient and the usual stability/steady-state analysis no longer applies. To this end, we develop a theoretical framework that allows us to obtain an accurate closed-form approximate solution to the original SI dynamics on any arbitrary network, which captures the temporal dynamics over all time and is tighter than the existing approximation, and also to provide a new interpretation via reliability theory. As its applications, we further develop vaccination policies with or without knowledge of already-infected nodes, to mitigate the future epidemic spreading to the extent possible, and demonstrate their effectiveness through numerical simulations.
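The transient behavior the paper analyzes can be seen in a minimal discrete-time SI simulation (a toy stand-in for the paper's continuous-time dynamics; the ring topology and infection probability are illustrative assumptions):

```python
import random

def si_spread(adj, seed_node, beta, steps, rng):
    """Discrete-time SI process: infected nodes never recover; at each
    step every infected node infects each susceptible neighbor with
    probability beta. Returns the infected-count trajectory."""
    infected = {seed_node}
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
        history.append(len(infected))
    return history

# A ring of 10 nodes as a small example network.
n = 10
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
hist = si_spread(adj, seed_node=0, beta=0.5, steps=50, rng=random.Random(1))
```

Since there is no recovery, the infected count is monotone and eventually saturates; all of the interesting behavior is in how fast the curve rises, which is exactly the transient regime the paper targets.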

FingerPass: Finger Gesture-based Continuous User Authentication for Smart Homes Using Commodity WiFi

Hao Kong, Li Lu, and Jiadi Yu (Shanghai Jiao Tong University); Yingying Chen (Rutgers University); Linghe Kong and Minglu Li (Shanghai Jiao Tong University)

Abstract: The development of smart homes has advanced the concept of user authentication to not only protect user privacy but also facilitate personalized services to users. Toward this direction, we propose to integrate user authentication with human-computer interactions between users and smart household appliances through widely-deployed WiFi infrastructures, which could be non-intrusive and device-free. In this paper, we propose a finger gesture-based user authentication system, called FingerPass, which leverages channel state information (CSI) of the surrounding WiFi signals to continuously authenticate users through finger gestures in smart homes. We investigate CSI of WiFi signals in depth and find that the CSI phase can be used to capture and distinguish the unique behavioral characteristics of different users. FingerPass separates the user authentication process into two stages, login and interaction, to achieve high authentication accuracy and low response latency simultaneously. In the login stage, we develop a deep learning-based approach to extract behavioral characteristics of finger gestures to achieve high accuracy for user identification. During the interaction stage, to provide continuous user authentication in real time for satisfactory user experience, we design a verification mechanism with lightweight classifiers to continuously authenticate the user’s identity during each interaction of finger gestures. Extensive experiments in real home environments show that FingerPass can achieve 91.4% authentication accuracy, and 186.6ms response time during interactions.

MWSR over an Uplink Gaussian Channel with Box Constraints: A Polymatroidal Approach

Peng-Jun Wan (Illinois Institute of Technology); Zhu Wang (SUNY at Oneonta); Huaqiang Yuan (Dongguan University of Technology); Jiliang Wang (Tsinghua University); Jinling Zhang (Renmin University of China)

Abstract: The rate capacity region of an uplink Gaussian channel is a generalized symmetric polymatroid. Practical applications impose additional lower and upper bounds on the rate allocations, which are represented by box constraints. A fundamental scheduling problem over an uplink Gaussian channel is to seek a rate allocation maximizing the weighted sum-rate (MWSR) subject to the box constraints. The best-known algorithm for this problem has time complexity O(n⁵ln^{O(1)}n). In this paper, we take a polymatroidal approach to developing a quadratic-time greedy algorithm and a linearithmic-time divide-and-conquer algorithm. A key ingredient of these two algorithms is a linear-time algorithm for minimizing the difference between a generalized symmetric rank function and a modular function after a linearithmic-time ordering.
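The polymatroid structure behind such greedy algorithms can be sketched for the unconstrained case (box constraints omitted for brevity; the rank function f(S) = log2(1 + sum of SNRs in S) and the example values are illustrative assumptions, not the paper's algorithm):

```python
import math

def gaussian_mac_greedy(weights, powers):
    """Classic polymatroid greedy for weighted sum-rate on an uplink
    Gaussian channel: visit users in decreasing weight order and give
    each its marginal rate f(S + i) - f(S), where
    f(S) = log2(1 + sum of SNRs in S)."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    rates = [0.0] * len(weights)
    total_snr = 0.0
    for i in order:
        before = math.log2(1 + total_snr)
        total_snr += powers[i]
        rates[i] = math.log2(1 + total_snr) - before
    return rates

# Three users with hypothetical weights and unit SNRs.
rates = gaussian_mac_greedy(weights=[3, 1, 2], powers=[1.0, 1.0, 1.0])
```

By polymatroid theory the greedy output is a vertex of the capacity region, and the rates always sum to f of the full user set; handling the box constraints on top of this is where the paper's contribution lies.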

Abstract: We consider a wireless network with a base station serving multiple traffic streams to different destinations. Packets from each stream arrive to the base station according to a stochastic process and are enqueued in a separate (per stream) queue. The queueing discipline controls which packet within each queue is available for transmission. The base station decides, at every time t, which stream to serve to the corresponding destination. The goal of scheduling decisions is to keep the information at the destinations fresh. Information freshness is captured by the Age of Information (AoI) metric. In this paper, we derive a lower bound on the AoI performance achievable by any given network operating under any queueing discipline. Then, we consider three common queueing disciplines and develop both an Optimal Stationary Randomized policy and a Max-Weight policy under each discipline. Our approach allows us to evaluate the combined impact of the stochastic arrivals, queueing discipline and scheduling policy on AoI. We evaluate the AoI performance both analytically and using simulations. Numerical results show that the performance of the Max-Weight policy is close to the analytical lower bound.
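A toy illustration of how the queueing discipline and the scheduler jointly shape AoI (a single-buffer "keep latest" discipline and a max-age rule standing in for the paper's Max-Weight policy; the arrival probabilities and horizon are illustrative assumptions):

```python
import random

def simulate_aoi(arrival_probs, horizon, rng):
    """Base station with one single-buffer queue per stream: each new
    arrival replaces any stale queued packet. Each slot, serve the
    waiting stream whose destination age is largest. Returns the
    time-average age across destinations."""
    n = len(arrival_probs)
    age = [0] * n          # age of information at each destination
    pkt_age = [None] * n   # age of the freshest queued packet, if any
    total = 0
    for _ in range(horizon):
        for i, p in enumerate(arrival_probs):
            if rng.random() < p:
                pkt_age[i] = 0          # fresh arrival replaces stale packet
        waiting = [i for i in range(n) if pkt_age[i] is not None]
        if waiting:
            j = max(waiting, key=lambda i: age[i])
            age[j] = pkt_age[j]         # delivery resets destination age
            pkt_age[j] = None
        age = [a + 1 for a in age]      # one slot passes
        pkt_age = [a + 1 if a is not None else None for a in pkt_age]
        total += sum(age)
    return total / (horizon * n)

avg_age = simulate_aoi([0.5, 0.5], horizon=20000, rng=random.Random(7))
```

Replacing the single-buffer discipline with FIFO in this sketch would make delivered packets staler, which is the kind of discipline-dependent effect the paper quantifies analytically.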

RAN Resource Usage Prediction for a 5G Slice Broker

Craig Gutterman (Columbia University); Edward Grinshpun and Sameer Sharma (Nokia Bell Laboratory); Gil Zussman (Columbia University)

Abstract: Network slicing will allow 5G network operators to offer a diverse set of services over a shared physical infrastructure. We focus on supporting the operation of the Radio Access Network (RAN) slice broker, which maps slice requirements into allocation of Physical Resource Blocks (PRBs). We first develop a new metric, REVA, based on the number of PRBs available to a single Very Active bearer. REVA is independent of channel conditions and allows easy derivation of an individual wireless link's throughput. In order for the slice broker to efficiently utilize the RAN, there is a need for reliable and short term prediction of resource usage by a slice. To support such prediction, we construct an LTE testbed and develop custom additions to the scheduler. Using data collected from the testbed, we compute REVA and develop a realistic time series prediction model for REVA. Specifically, we present the X-LSTM prediction model, based upon Long Short-Term Memory (LSTM) neural networks. Evaluated with data collected in the testbed, X-LSTM outperforms Autoregressive Integrated Moving Average Model (ARIMA) and LSTM neural networks by up to 31%. X-LSTM also achieves over 91% accuracy in predicting REVA. By using X-LSTM to predict future usage, a slice broker is better able to provision a slice and reduce over-provisioning and SLA violation costs by more than 10% in comparison to LSTM and ARIMA.

Competition with Three-Tier Spectrum Access and Spectrum Monitoring

Arnob Ghosh (Purdue University); Randall Berry (Northwestern University)

Abstract: The Citizens Broadband Radio Service (CBRS) recently adopted in the U.S. enables two tiers of commercial users to share spectrum with a third tier of incumbent users. This sharing can be further assisted by Environmental Sensing Capability operators (ESCs), that monitor the spectrum occupancy to determine when use of the spectrum will not harm incumbents. Two key aspects of this framework that impact how firms may compete are the differences in information provided by different ESCs and the different tiers in which a firm may access the spectrum. We develop a game theoretic model that captures both of these features and analyze it to gain insight into their impact. Our analysis reveals that the amount of unlicensed bandwidth and licensed bandwidth must be chosen judiciously in order to maximize the social welfare.

QFlow: A Reinforcement Learning Approach to High QoE Video Streaming over Wireless Networks

Rajarshi Bhattacharyya, Archana Bura, Desik Rengarajan, Mason Rumuly, Srinivas Shakkottai, and Dileep Kalathil (Texas A&M University); Ricky K. P. Mok and Amogh Dhamdhere (CAIDA, UCSD)

Abstract: Wireless Internet access has brought legions of heterogeneous applications all sharing the same resources. However, current wireless edge networks that cater to worst or average case performance lack the agility to best serve these diverse sessions. Simultaneously, software reconfigurable infrastructure has become increasingly mainstream to the point that dynamic per packet and per flow decisions are possible at multiple layers of the communications stack. Exploiting such reconfigurability requires the design of a system that can enable a configuration, measure the network performance statistics (Quality of Service), learn the impact on the application performance (Quality of Experience), and adaptively select a new configuration. The goal of this work is to design, develop and demonstrate QFlow that instantiates this feedback loop. Our context is that of reconfigurable queueing, and we use the popular application of video streaming as our example. Through experimental validation, we show how measurement, learning and control are combined to enable high QoE video streaming on QFlow.

Quantized VCG Mechanisms for Polymatroid Environments

Hao Ge and Randall A. Berry (Northwestern University)

Abstract: Many network resource allocation problems can be viewed as allocating a divisible resource, where the allocations are constrained to lie in a polymatroid. We consider market-based mechanisms for such problems. Though the Vickrey-Clarke-Groves (VCG) mechanism can provide the efficient allocation with strong incentive properties (namely dominant strategy incentive compatibility), its well-known high communication requirements can prevent it from being used. There have been a number of approaches for reducing the communication costs of VCG by weakening its incentive properties. Here, instead, we take a different approach of reducing communication costs via quantization while maintaining VCG’s dominant strategy incentive properties. The cost for this approach is a loss in efficiency, which we characterize. We first consider quantizing the resource allocations so that agents need only submit a finite number of bids instead of a full utility function. We subsequently consider quantizing the agents’ bids.
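A brute-force sketch of VCG over a quantized allocation grid (the grid, the single capacity constraint standing in for a polymatroid, and the linear utilities are illustrative assumptions; with quantized allocations, agents only ever report utilities at grid points):

```python
from itertools import product

def vcg_quantized(utilities, grid, capacity):
    """Pick the feasible grid allocation maximizing total reported
    utility, and charge each agent the externality it imposes on the
    others (the VCG payment), keeping dominant-strategy incentives."""
    n = len(utilities)

    def best(include):
        # Exhaustive search over the quantized allocation space.
        best_val, best_alloc = float("-inf"), None
        for alloc in product(grid, repeat=n):
            if sum(alloc[i] for i in include) <= capacity:
                val = sum(utilities[i](alloc[i]) for i in include)
                if val > best_val:
                    best_val, best_alloc = val, alloc
        return best_val, best_alloc

    _, alloc = best(range(n))
    payments = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        without_i, _ = best(others)              # welfare if i were absent
        with_i = sum(utilities[j](alloc[j]) for j in others)
        payments.append(without_i - with_i)      # externality imposed by i
    return alloc, payments

# Two agents with hypothetical linear utilities sharing one unit.
alloc, pay = vcg_quantized([lambda x: 2 * x, lambda x: x],
                           grid=[0, 0.5, 1], capacity=1)
```

Coarsening the grid shrinks the message space (fewer bids to submit) but can force the chosen allocation away from the continuous optimum; that efficiency loss is what the paper bounds.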

Online Scheduling of Traffic Diversion and Cloud Scrubbing with Uncertainty in Current Inputs

Lei Jiao (University of Oregon); Ruiting Zhou (Wuhan University); Xiaojun Lin (Purdue University); Xu Chen (Sun Yat-sen University)

Abstract: Operating distributed Scrubbing Centers (SCs) to mitigate massive Distributed Denial of Service (DDoS) traffic in large-scale networks faces critical challenges. The operator needs to determine in an online manner the diversion rule installation and elimination in the networks, as well as the scrubbing resource activation and revocation in the SCs, while minimizing the long-term cost and the cumulative decision-switching penalty, without knowing the exact amount of the malicious traffic. We model and formulate this problem as an online and nonlinear integer program. In contrast to many other online problems where future inputs are unknown but at least current inputs are known, a key new challenge here is that even part or all of the current inputs are unknown when decisions are made. To "learn" the best decisions online, we transform our problem via a gap-preserving approximation into an online optimization problem with the known current inputs, which is in turn relaxed and decoupled into a series of one-shot convex programs solvable in individual time slots. Further, to overcome the integral intractability, we design a progressive rounding algorithm to convert fractional decisions into integral ones without violating any constraints. We formally characterize the competitive ratio of our proposed approach as a function of the key parameters of our problem. We conduct extensive evaluations using real-world data and confirm our algorithms' superiority over de facto practices and state-of-the-art methods.

ERSCC: Efficient and Reliable Screen-Camera Communication

Ouyang Zhang, Zhenzhi Qian, Yifan Mao, Ness Shroff, and Kannan Athreya (The Ohio State University)

Abstract: Each camera on digital devices is made of hundreds of thousands of sensors, with which it can separate and capture the light from spatial points in a fine-grained manner, enabling high-rate data transfer from a digital display to the camera, i.e., screen-camera communication (SCC). Compared with RF technologies, visible light approaches are more convenient and secure. In this work, we design a mobile-to-mobile SCC system, ERSCC, to handle the rolling shutter issue and increase reliability. First, a time-domain self-restoration coding scheme (TDSR) is proposed to identify mixed frames, doubling the coding efficiency. Second, we design a feedback channel for reliability. It is further integrated with an adaptive scheme to achieve robustness in challenging conditions. We implement the system prototype on the Android platform and conduct extensive experiments. The results show that ERSCC increases the throughput by an order of magnitude (16x) compared with existing rolling-shutter solutions and by two times (1.8x) over the color-barcode system.

Scaling Deep Learning Models for Spectrum Anomaly Detection

Zhijing Li (UCSB); Zhujun Xiao (University of Chicago); Bolun Wang (UCSB); Ben Y. Zhao and Haitao Zheng (University of Chicago)

Abstract: Spectrum management in cellular networks is a challenging task that will only increase in difficulty as complexity grows in hardware, configurations, and new access technology (e.g. LTE for IoT devices). Wireless providers need robust and flexible tools to monitor and detect faults and misbehavior in physical spectrum usage, and to deploy them at scale. In this paper, we explore the design of such a system by building deep neural network (DNN) models to capture spectrum usage patterns and use them as baselines to detect spectrum usage anomalies resulting from faults and misuse. Using detailed LTE spectrum measurements, we show that the key challenge facing this design is model scalability, i.e. how to train and deploy DNN models at a large number of static and mobile observers located throughout the network. We address this challenge by building context-free DNN models for spectrum usage and applying transfer learning to minimize training time and dataset constraints. The end result is a unified DNN model that can be easily deployed on both mobile and static observers, enabling timely detection of spectrum anomalies across LTE networks.

Timely-Throughput Optimal Coded Computing over Cloud Networks

Chien-Sheng Yang (University of Southern California); Ramtin Pedarsani (University of California, Santa Barbara); A. Salman Avestimehr (University of Southern California)

Abstract: In modern distributed computing systems, unpredictable and unreliable infrastructures result in high variability of computing resources. Meanwhile, there is significantly increasing demand for timely and event-driven services with deadline constraints. Motivated by measurements over Amazon EC2 clusters, we consider a two-state Markov model for variability of computing speed in cloud networks. In this model, each worker can be either in a good state or a bad state in terms of the computation speed, and the transition between these states is modeled as a Markov chain which is unknown to the scheduler. We then consider a Coded Computing framework, in which the data is possibly encoded and stored at the worker nodes in order to provide robustness against nodes that may be in a bad state. With timely computation requests submitted to the system with computation deadlines, our goal is to design the optimal computation-load allocation scheme and the optimal data encoding scheme that maximize the timely computation throughput (i.e., the average number of computation tasks that are accomplished before their deadline). Our main result is the development of a dynamic computation strategy called Lagrange Estimate-and-Allocate (LEA) strategy, which achieves the optimal timely computation throughput. It is shown that compared to the static allocation strategy, LEA increases the timely computation throughput by 1.4× ∼ 17.5× in various scenarios via simulations and by 1.27× ∼ 6.5× in experiments over Amazon EC2 clusters.

FREE: A Fast and Robust Key Extraction Mechanism via Inaudible Acoustic Signal

Youjing Lu and Fan Wu (Shanghai Jiao Tong University, P.R. China); Shaojie Tang (University of Texas at Dallas, USA); Linghe Kong and Guihai Chen (Shanghai Jiao Tong University, P.R. China)

Abstract: To build a secure wireless networking system, it is essential that the cryptographic key is known only to the two (or more) communicating parties. Existing key extraction schemes require the devices to be in physical proximity and utilize the common inherent randomness between the devices to agree on a secret key, but they often rely on custom devices and have low bit rates. In this paper, we seek a key extraction approach that only leverages off-the-shelf mobile devices, while achieving significantly higher key generation efficiency. The core idea of our approach is to exploit the fast-varying inaudible acoustic channel for generating enough randomness, and wireless parallel communication for exchanging reconciliation information, to improve the key generation rate. We have carefully studied and validated the feasibility of our approach through both theoretical analysis and a variety of measurements. We implement our approach on different mobile devices and conduct extensive experiments in different real scenarios. The experiment results show that our approach achieves high efficiency and satisfactory robustness. Compared with state-of-the-art methods, our approach improves the key generation rate by 38.46% and reduces the bit mismatch ratio by 42.34%.

Predictive Online Server Provisioning for Cost-Efficient IoT Data Streaming Across Collaborative Edges

Zhi Zhou, Xu Chen, Weigang Wu, and Di Wu (Sun Yat-sen University); Junshan Zhang (Arizona State University)

Abstract: Edge computing is envisioned to be the de-facto standard of hosting emerging low latency Internet-of-Things (IoT) data streaming services as exemplified by live video analytics. For IoT data streaming in edge computing, cost management is of strategic significance, due to the low cost-efficiency of edge servers. While existing literature adopts a reactive approach to dynamically provisioning servers to reduce cost, the time overhead of server provisioning has been mostly ignored. In this paper, we target a proactive approach to dynamic server provisioning for IoT data streaming across edge nodes, which adjusts server provisioning ahead of time, based on prediction of the upcoming workload. To effectively predict the upcoming workload, a learning-based method, online gradient descent, is applied. We further combine the online learning method with an online optimization algorithm for server provisioning in a joint online optimization framework, through (1) minimizing the regret incurred by inaccurate workload prediction, and (2) minimizing the cost incurred by sub-optimal online decisions. The resulting predictive online algorithm achieves a good performance guarantee, as verified by both rigorous theoretical analysis and extensive trace-driven simulations.
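The predict-then-provision loop can be sketched with a scalar online-gradient-descent learner (a crude stand-in for the paper's predictor and provisioning optimizer; the learning rate, server capacity, and constant workload are illustrative assumptions):

```python
import math

def ogd_provision(workloads, eta, server_capacity):
    """Each slot: provision ceil(prediction / capacity) servers ahead of
    time, then observe the true workload and take an online gradient
    descent step on the squared prediction error."""
    pred = 0.0
    decisions, losses = [], []
    for w in workloads:
        decisions.append(max(0, math.ceil(pred / server_capacity)))
        losses.append((pred - w) ** 2)
        pred -= eta * 2 * (pred - w)   # gradient step on (pred - w)^2
    return decisions, losses

decisions, losses = ogd_provision([10.0] * 8, eta=0.25, server_capacity=5)
```

On a stationary workload the prediction error shrinks every slot, so the provisioning decision stabilizes; the paper's framework additionally bounds the cost of the slots where the learner is still wrong.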

MIMO with Energy Recycling

Y. Ozan Basciftci (The Ohio State University); Amr Abdelaziz (The Military Technical College); Ahmed Bendary and C. Emre Koksal (The Ohio State University)

Abstract: In this paper, a multiple-input-single-output (MISO) point-to-point communication system is considered, in which a multiple-input-multiple-output (MIMO) energy recycling (ER) transmitter is designed such that each antenna can transmit information (multi-input) or recycle energy (multi-output) at any given point in time. The rate achievable by such an ER-MISO communication system under an average transmission power constraint is characterized. Moreover, optimal power allocation and dynamic antenna selection policies that achieve the maximum communication rate are derived. The optimal dynamic antenna selection policy carefully switches the mode of the antennas between active antennas (transmitting) and ER antennas, where most of the harvested energy comes from the neighboring antennas’ transmissions, i.e., recycling. In addition, it is shown that, with ER, the achievable rate exceeds the capacity of the classical non-recycling counterpart. Since the complexity of the optimal dynamic antenna selection is exponential in the number of antennas, a linearithmic algorithm with minimal degradation in the achievable rate is proposed. Numerical results show that notable gains can be obtained by enabling ER. To address the major questions on the capability of ER and the impact of antenna coupling, a hardware setup and experimental results for a 4-antenna ER transmitter are developed, based on a uniform linear array (ULA). Hardware measurements of the recycled energy and the received power at the target receiver indicate that the loss in rate due to antenna coupling can be eliminated with sufficient antenna spacing, and therefore ER can achieve significant gains.

Side-information-aided Non-coherent Beam Alignment Design for Millimeter Wave Systems

Yi Zhang, Kartik Patel, Sanjay Shakkottai, and Robert W. Heath Jr. (The University of Texas at Austin)

Abstract: Designing efficient and robust beam alignment strategies for millimeter wave (mmWave) systems is important for overcoming training overheads and practical hardware impairments. In this work, we leverage side information in the form of prior knowledge of the angular support of the propagation channel (direction information) to design a compressive sensing (CS) based beam alignment algorithm. Existing CS based channel estimation approaches assume perfect phase information of the measurements, which is not the case with the low-cost off-the-shelf mmWave phased arrays. Instead, we develop a two-stage algorithm where we use the magnitude of measurements (aka non-coherent measurements); using phase retrieval (PR) followed by sparse recovery, we estimate the channel gain across various (quantized) spatial angles. To validate the proposed algorithm, we develop a fully reconfigurable mmWave testbed with custom-made 2-bit phased arrays. We perform a careful calibration of the phased arrays, enabling generation of precise desired beam patterns. Our implementation and real experiments validate both the proposed algorithm and the calibration process by demonstrating consistency between the experimental results and the theoretical analysis.

Barracuda: Theoretical Analysis of the Power of l-polling in Proof-of-Stake Blockchains

Giulia Fanti (CMU); Jiantao Jiao (UC Berkeley); Ashok Makkuva, Sewoong Oh, Ranvir Rana, and Pramod Viswanath (UIUC)

Abstract: A blockchain is a database of sequential events that is maintained by a distributed group of nodes. A key consensus problem in blockchains is that of determining the next block (data element) in the sequence. Many blockchains address this by electing a new node to propose each new block. The new block is (typically) appended to the tip of the proposer’s local blockchain, and subsequently broadcast to the rest of the network. Without network delay (or adversarial behavior), this procedure would give a perfect chain, since each proposer would have the same view of the blockchain. A major challenge in practice is forking. Due to network delays, a proposer may not yet have the most recent block, and may therefore create a side chain that branches from the middle of the main chain. Forking reduces throughput, since only a single main chain can survive, and all other blocks are discarded. We propose a new P2P protocol for blockchains called Barracuda, in which each proposer, prior to proposing a block, polls l other nodes for their local blocktree information. Under a canonical stochastic network model, we prove that this lightweight primitive strongly ameliorates the informational imbalance: the resulting throughput is as if the entire network were a factor of l faster. We provide guidelines on how to implement Barracuda in practice, with a specific emphasis on proof-of-stake blockchains, guaranteeing robustness against several real-world factors.
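The effect of l-polling on forking can be seen in a toy simulation (a crude stand-in for the paper's stochastic network model; the broadcast-miss probability, node count, and fork accounting are illustrative assumptions):

```python
import random

def simulate_forking(n_nodes, n_blocks, ell, miss_prob, rng):
    """views[q] is the longest main-chain length node q knows. Before
    proposing, a node polls ell random peers and adopts the longest
    view; a block built on a stale view forks and is wasted. Each
    broadcast reaches each node with probability 1 - miss_prob.
    Returns the number of wasted (forked) blocks."""
    views = [0] * n_nodes
    main, wasted = 0, 0
    for _ in range(n_blocks):
        p = rng.randrange(n_nodes)
        polled = rng.sample(range(n_nodes), ell)
        view = max([views[p]] + [views[q] for q in polled])
        if view == main:
            main += 1            # extends the tip: main chain grows
        else:
            wasted += 1          # built on a stale tip: forked block
        for q in range(n_nodes): # lossy broadcast of the main-chain length
            if rng.random() > miss_prob:
                views[q] = main
    return wasted

rng = random.Random(3)
wasted_no_poll = simulate_forking(15, 200, ell=0, miss_prob=0.3, rng=rng)
wasted_full_poll = simulate_forking(15, 200, ell=14, miss_prob=0.3, rng=rng)
```

Polling even a few peers before proposing sharply cuts the chance of building on a stale tip, which is the informational-imbalance effect the paper quantifies.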

Wideband Full-Duplex Phased Array with Joint Transmit and Receive Beamforming: Optimization and Rate Gains

Tingjun Chen, Mahmood Baraani Dastjerdi, Harish Krishnaswamy, and Gil Zussman (Columbia University)

Abstract: Full-duplex (FD) wireless and phased arrays are both promising techniques that can significantly improve data rates in future wireless networks. However, integrating FD with transmit (Tx) and receive (Rx) phased arrays is extremely challenging, due to the large number of self-interference (SI) channels. Previous work relies on either RF canceller hardware or on analog/digital Tx beamforming (TxBF) to achieve SI cancellation (SIC). However, Rx beamforming (RxBF) and the data rate gain introduced by FD nodes employing beamforming have not been considered yet. We study FD phased arrays with joint TxBF and RxBF with the objective of achieving improved FD data rates. The key idea is to carefully select the TxBF and RxBF weights to achieve wideband RF SIC in the spatial domain with minimal TxBF and RxBF gain losses. Essentially, TxBF and RxBF are repurposed, thereby not requiring specialized RF canceller circuitry. We formulate the corresponding optimization problem and develop an iterative algorithm to obtain an approximate solution with provable performance guarantees. Using SI channel measurements and datasets, we extensively evaluate the performance of the proposed approach in different use cases under various network settings. The results show that an FD phased array with 9/36/72 elements can cancel the total SI power to below the noise floor with sum TxBF and RxBF gain losses of 10.6/7.2/6.9 dB, even at a Tx power level of 30 dBm. Moreover, the corresponding FD rate gains are at least 1.33/1.66/1.68x.
