Room: E-VI Room 289
Room: E-VI Mong Auditorium (Room 180)
Room: E-VI Room 134
Room: E-VI Room 176
Room: E-VI Room 176
Room: E-VI Room 289
Room: E-VI Room 134
Session Chairs: Mario Gerla and Suhas Diggavi (conference chairs)
Room: E-VI Mong Auditorium (Room 180)
Abstract: Caching is an important component of content distribution networks. The traditional role of caching is to allow delivery of content to an end user from a nearby cache rather than from a remote server. While this approach is optimal for systems with a single cache, recent work in Information Theory shows this to be significantly suboptimal for systems with multiple caches. Instead, cache memories should be used to enable coded multicasting opportunities. This talk surveys these recent developments. We discuss both the fundamental performance limits of cache networks and the practical challenges that need to be overcome in real-life scenarios.
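As a concrete point of reference (a standard coded-caching result, stated here for background rather than taken from the abstract): for a server holding N files and K users, each with a cache of capacity M files, the delivery rates without and with coded multicasting are

```latex
R_{\mathrm{uncoded}}(M) = K\left(1 - \frac{M}{N}\right),
\qquad
R_{\mathrm{coded}}(M) = K\left(1 - \frac{M}{N}\right)\cdot\frac{1}{1 + KM/N}.
```

The extra factor 1/(1 + KM/N) is the global caching gain: it grows with the aggregate cache size KM, which is why coded multicasting significantly outperforms nearest-cache delivery in systems with many caches.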
Session Chair: Yung Yi (KAIST)
Room: E-VI Mong Auditorium (Room 180)
Suining He and Kang G. Shin (The University of Michigan, Ann Arbor)
Abstract: Thanks to their great success as a green means of urban transportation for first/last-mile connectivity, bike sharing service (BSS) networks have been proliferating all over the globe. Station (re)placement and dock resizing have thus become increasingly important problems for bike sharing service providers.
In contrast to the use of conventional labor-intensive user surveys, we propose a novel optimization framework called CBikes, (re)configuring the BSS network with crowdsourced station suggestions from online websites. Based on comprehensive real data analyses, we identify and utilize important global trip patterns to (re)configure the BSS network while balancing the local biases of individual feedback. Specifically, crowdsourced feedback, station usage history, cost, and other constraints are fused into a joint optimization of BSS network configuration. We further design a semidefinite programming transformation to solve the bike station (re)placement problem efficiently and effectively. Our evaluation has demonstrated the effectiveness and accuracy of CBikes in (re)placing stations and resizing docks based on three large BSS systems (with more than 900 stations) in Chicago, Twin Cities, and Los Angeles, as well as related crowdsourced feedback.
Chenglin Miao (State University of New York at Buffalo), Qi Li (University of Illinois at Urbana-Champaign), and Houping Xiao, Wenjun Jiang, Mengdi Huai, and Lu Su (State University of New York at Buffalo)
Abstract: With the proliferation of sensor-rich mobile devices, crowd sensing has emerged as a new paradigm of collecting information from the physical world. However, the sensory data provided by the participating workers are usually not reliable. In order to identify truthful values from the crowd sensing data, the topic of truth discovery, whose goal is to estimate each worker’s reliability and infer the underlying truths through weighted data aggregation, is widely studied. Since truth discovery incorporates workers’ reliability into the aggregation procedure, it shows robustness to data poisoning attacks, which are usually conducted by malicious workers who aim to degrade the effectiveness of crowd sensing systems by providing malicious sensory data. However, truth discovery is not perfect in all cases. In this paper, we study how to effectively conduct two types of data poisoning attacks, i.e., the availability attack and the target attack, against a crowd sensing system empowered with the truth discovery mechanism. We develop an optimal attack framework in which the attacker can not only maximize his attack utility but also disguise the introduced malicious workers as normal ones such that they cannot be detected easily. The desirable performance of the proposed framework is verified through extensive experiments conducted on a real-world crowd sensing system.
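For context, a minimal sketch of the kind of weighted-aggregation truth discovery the abstract refers to (a generic CRH-style iteration with invented worker reports, not the paper's attack framework):

```python
import math

def truth_discovery(reports, iters=10):
    """Generic iterative truth discovery sketch: alternately estimate
    truths as weight-averaged reports, then re-weight each worker by
    how far its reports deviate from the current estimates."""
    workers = list(reports)
    tasks = range(len(next(iter(reports.values()))))
    weights = {w: 1.0 for w in workers}
    for _ in range(iters):
        # Weighted aggregation: estimate each task's underlying truth.
        truths = [sum(weights[w] * reports[w][t] for w in workers) /
                  sum(weights.values()) for t in tasks]
        # Reliability update: lower total error -> higher weight.
        errors = {w: sum((reports[w][t] - truths[t]) ** 2 for t in tasks)
                  for w in workers}
        total = sum(errors.values())
        weights = {w: math.log(total / (errors[w] + 1e-12)) for w in workers}
    return truths, weights

# Two accurate workers and one unreliable one, reporting three values.
reports = {
    "w1": [10.1, 20.0, 29.9],
    "w2": [9.9, 20.1, 30.1],
    "w3": [15.0, 25.0, 35.0],   # unreliable worker
}
truths, weights = truth_discovery(reports)
```

The attacks studied in the paper target exactly this coupling: malicious reports are crafted so that the reliability update still assigns the attackers non-trivial weight.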
Xiaowen Gong (Auburn University); Ness Shroff (The Ohio State University)
Session Chair: Chiara Petrioli (Un. of Rome La Sapienza)
Room: E-VI Mong Auditorium (Room 180)
Mohit Hota and Sanjiv Kapoor (Illinois Institute of Technology)
Abstract: With the increase in the number of connected wireless devices, simultaneous use of multiple wireless networks has been proposed as a solution to improve throughput and reduce congestion. The autonomous choice of networks by clients leads to a network selection game, where selfish clients strategize to find the best set of providers. In this network selection game, we define client utility as a function of the throughput achieved when the number of such network connections is constrained. The throughput model is based on capacity sharing and incorporates link-dependent PHY rates and priority levels of the clients. This paper studies the stability of the network selection game and provides bounds on the inefficiency of such games. We also present results based on simulation studies.
Stefano Basagni (ECE Dept., Northeastern University, Boston, MA) and Valerio Di Valerio, Petrika Gjanci, and Chiara Petrioli (Dipartimento di Informatica, Università di Roma “La Sapienza,” Roma, Italy)
Abstract: We demonstrate the feasibility of long-lasting underwater networking by proposing the smart exploitation of the energy harvesting capabilities of underwater sensor nodes. We define a data routing framework that allows senders to select the best forwarding relay taking into account both residual energy and foreseeable harvestable energy. Our forwarding method, named HyDRO, for Harvesting-aware Data ROuting, is also configured to consider channel conditions and route-wide residual energy, performing network-wide optimization via local information sharing. The performance of our protocol is evaluated via simulations in scenarios modeled to include realistic underwater settings as well as energy harvesting based on recorded traces. HyDRO is compared to state-of-the-art forwarding protocols for underwater networks. Our results show that jointly considering residual and predicted energy availability is key to achieving lower energy consumption and latency, while obtaining a much higher packet delivery ratio. Finally, compared to when run in networks without energy harvesters, HyDRO achieves remarkably higher performance, providing a compelling case for energy harvesting-enabled underwater wireless sensor networks.
David Applegate and Aaron Archer (Google), David S. Johnson (deceased), Evdokia Nikolova (University of Texas at Austin), Mikkel Thorup (University of Copenhagen), and Ger Yang (University of Texas at Austin)
Abstract: When deciding where to place access points in a wireless network, it is useful to model the signal propagation loss between a proposed antenna location and the areas it may cover. The indoor dominant path (IDP) model, introduced by Wölfle et al., has been shown in the literature to have good validation and generalization error, is faster to compute than competing methods, and is used in commercial software such as WinProp, iBwave Design, and CellTrace. Previously known algorithms for computing it involve a worst-case exponential-time tree search, with pruning heuristics for speed.
We prove that the IDP model can be reduced to a parametric shortest path computation on a graph derived from the walls in the floorplan. It therefore admits a quasipolynomial-time (i.e., <span class="math inline">n^{O(log n)}</span>) algorithm. Moreover, we give a practical approximation algorithm based on running a small constant number of shortest path computations. Its provable worst-case additive error (in dB) can be made arbitrarily small, and is well below 1 dB for reasonable choices of parameters. We evaluate this algorithm empirically against the exact IDP model, showing that it consistently beats its theoretical worst-case bounds, solving the model exactly (i.e., no error) in the vast majority of cases.
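As background (not the paper's algorithm, which handles a parametric family of losses), the additive shortest-path core of such a reduction can be pictured as a plain Dijkstra run on a graph whose edge weights are propagation losses in dB; the graph, node names, and weights below are invented for illustration.

```python
import heapq

def min_loss_db(graph, src, dst):
    """Dijkstra over a graph whose edge weights are losses in dB
    (additive), returning the loss of the dominant (lowest-loss) path."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy floorplan graph: edge weights combine free-space loss and
# wall-penetration penalties (all values invented).
graph = {
    "AP":   [("door", 3.0), ("wall", 10.0)],
    "door": [("hall", 2.0)],
    "wall": [("hall", 1.0)],
    "hall": [("client", 4.0)],
}
loss = min_loss_db(graph, "AP", "client")   # 3 + 2 + 4 = 9.0 dB
```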
Session Chair: Thomas B. Stahlbuhk (MIT)
Room: E-VI Mong Auditorium (Room 180)
Huacheng Zeng and Hongxiang Li (University of Louisville) and Qiben Yan (University of Nebraska-Lincoln)
Abstract: In wireless LANs (WLANs), network-wide time and frequency synchronization among user devices is widely regarded as a necessity for uplink MU-MIMO. Therefore, to enable uplink MU-MIMO in 802.11ax, dedicated MAC protocols (e.g., trigger frame and timing advance mechanism) have been proposed to synchronize user devices in the time and frequency domains. Such MAC protocols increase not only network complexity but also communication overhead. In this paper, we show that the time and frequency synchronization among user devices is not a necessity for uplink MU-MIMO. We propose a practical uplink MU-MIMO solution which does not require time and frequency alignments of the signals from user devices. The key component in our solution is a new PHY design for the AP’s receiver, which can decode the concurrent signals from multiple asynchronous user devices. We have built a prototype of our uplink MU-MIMO solution on a USRP2-GNU Radio testbed. Experimental results show that, using the new PHY, an M-antenna AP can successfully decode the concurrent signals from M asynchronous user devices (2 ≤ M ≤ 4).
Muhammad Kumail Haider, Yasaman Ghasempour, and Edward W. Knightly (Rice University)
Abstract: We present SearchLight, a system that enables adaptive steering of highly directional 60 GHz beams via passive sensing of visible light from existing illumination sources. The key idea is to simultaneously track a mobile device’s position and orientation using intensity measurements from lighting infrastructure, and to adapt client and AP beams to maintain beam alignment, without training overhead or outages in the 60 GHz band. Our implementation on custom dual-band hardware with 2 GHz wide channels and 24-element, electronically steerable phased array antennas shows that SearchLight successfully tracks client mobility and achieves up to 3x throughput gains compared to an in-band training strategy, and eliminates millisecond-scale in-band training epochs.
Ding Zhang, Mihir Garude, and Parth H. Pathak (George Mason University)
Abstract: 60 GHz millimeter-wave WLANs are gaining traction with their ability to provide multi-gigabit per second data rates. In spite of their potential, link outages due to human body blockage remain a challenging outstanding problem. In this work, we propose mmChoir, a novel proactive blockage mitigation technique that utilizes joint transmissions from multiple Access Points (APs) to provide blockage resilience to clients. We derive a new reliability metric based on the angular spread of incoming paths to a client and their blockage probabilities. The metric can be used to intelligently select joint transmissions that can provide higher reliability. The reliability metric, along with a novel interference estimation model, is used by mmChoir’s scheduler to judiciously schedule joint transmissions, and increase network capacity and reliability. Our testbed and trace-driven simulations show that mmChoir outperforms an existing beam-switching based blockage mitigation scheme with, on average, 58% higher network throughput.
Session Chair: Rajat Talak (MIT)
Room: E-VI Mong Auditorium (Room 180)
Chang-Sik Choi, Francois Baccelli, and Gustavo de Veciana (University of Texas at Austin)
Abstract: Disruptive changes are underway in the automotive industry as large-scale platforms based on vehicular fleets are deployed to deliver ride sharing and delivery services. Such platforms can also be leveraged to deliver wireless connectivity services, e.g., large-scale connectivity for the Internet of Things (IoT). This paper examines a network architecture based on a mesh of IoT devices, roadside repositories and vehicular mobile gateways – referred to as mesh+vehicular. We propose a system-level model to study its relative merits versus conventional infrastructure-based IoT architectures – referred to as mesh+cellular. The model reflects the salient properties of the architectures including the key interplay among the variability in the network geometries, routing trees, wireless capacity, and eventually IoT queue stability.
The paper provides an initial study of the scaling of the IoT sensing capacity of the routing trees per repository and base station respectively for the two architectures, i.e., the scaling of the maximum common traffic rate the trees’ IoT devices can generate while maintaining the stability of their queues. We then define the harvesting capacity per mobile gateway and base station in the two architectures, i.e., the average aggregate IoT rate each can extract assuming IoT devices are limited to their sensing capacity in each tree. Perhaps surprisingly, we show that as the spatial density <span class="math inline">λ_s</span> of IoT devices and corresponding density of repositories along roads scale up, the proposed mesh+vehicular architecture has a gain in its harvesting capacity of order at least λ_s^{γ/4}, where γ is the wireless path loss exponent. Underlying such gains is a fundamental shift in network geometry and information flows: in mesh+cellular systems IoT data is routed toward cells’ sinks (zero-dimensional objects), while in mesh+vehicular data is routed to road-induced cell edges (one-dimensional objects). Detailed system-level simulations validate the obtained scaling results.
Wei Bao, Yonghui Li, and Branka Vucetic (University of Sydney)
Abstract: Base station (BS) cooperation has been regarded as an effective solution to improve network coverage and throughput in next-generation wireless systems. However, it also introduces more complicated handoff patterns, which may potentially degrade user performance. In this work, we aim to theoretically quantify users’ handoff performance in disjoint-clustered cooperative wireless networks. It is a challenging task due to spatial randomness of network topologies. We propose a stochastic geometric model on user mobility, and use it to derive a theoretical expression for the handoff rate experienced by an active user with arbitrary movement trajectory. As a study on the application of the handoff rate analysis, we furthermore characterize the average downlink user data rate under a common non-coherent joint-transmission scheme, which is then used to derive a tradeoff between handoff and data rates in a network with an optimal cooperative cluster size. Finally, extensive simulations are conducted to validate our analysis.
Chaoyun Zhang and Paul Patras (The University of Edinburgh)
Abstract: Forecasting with high accuracy the volume of data traffic that mobile users will consume is becoming increasingly important for precision traffic engineering, demand-aware network resource allocation, as well as public transportation. Measurement collection in dense urban deployments is, however, complex and expensive, and the post-processing required to make predictions is highly non-trivial, given the intricate spatio-temporal variability of mobile traffic due to user mobility. To overcome these challenges, in this paper we harness the exceptional feature extraction abilities of deep learning and propose a Spatio-Temporal neural Network (STN) architecture purposely designed for precise network-wide mobile traffic forecasting. We present a mechanism that fine-tunes the STN and enables its operation with only limited ground truth observations. We then introduce a Double STN technique (D-STN), which uniquely combines the STN predictions with historical statistics, thereby making faithful long-term mobile traffic projections. Experiments we conduct with real-world mobile traffic data sets, collected over 60 days in both urban and rural areas, demonstrate that the proposed (D-)STN schemes perform up to 10-hour long predictions with remarkable accuracy, irrespective of the time of day when they are triggered. Specifically, our solutions achieve up to 61% smaller prediction errors as compared to widely used forecasting approaches, while operating with up to 600 times shorter measurement intervals.
Abstract: Freight transportation is of utmost importance for our society. Road transport accounts for about 26% of all energy consumption and 18% of greenhouse gas emissions in the EU. Despite this influence, road goods transportation is mainly done by individual long-haulage trucks with no real-time coordination or global optimisation. In this talk, we will discuss how modern information and communication technology supports a cyber-physical transportation system architecture with an integrated logistic system coordinating fleets of trucks travelling together in vehicle platoons. Thanks to the reduced air drag, platooning trucks travelling close together can save more than 10% of their fuel consumption. In addition, by automating the driving, it is possible to change driver regulations and save even more costs. Control, communication, and estimation problems at various levels of this transportation system will be presented. It will be argued that a system architecture utilising vehicle-to-vehicle and vehicle-to-infrastructure communication enables robust and safe control of individual trucks as well as optimised vehicle fleet collaborations and new market opportunities. Extensive experiments done on European highways will illustrate system performance and safety requirements. The presentation will be based on joint work over the last ten years with collaborators at KTH and at the truck manufacturer Scania.
Session Chair: Michael Neely (USC)
Room: E-VI Mong Auditorium (Room 180)
Rajat Talak, Sertac Karaman, and Eytan Modiano (MIT)
Abstract: Age of information (AoI) is a recently proposed metric for measuring information freshness. AoI measures the time elapsed since the last received update was generated. We consider the problem of minimizing average and peak AoI in wireless networks under general interference constraints. When fresh information is always available for transmission, we show that a stationary scheduling policy is peak age optimal. We also prove that this policy achieves average age that is within a factor of two of the optimal average age. In the case where fresh information is not always available, and the packet/information generation rate has to be controlled along with scheduling links for transmission, we prove an important separation principle: the optimal scheduling policy can be designed assuming fresh information, and independently, the packet generation rate control can be done by ignoring interference. Peak and average AoI for the discrete-time G/Ber/1 queue are analyzed for the first time, which may be of independent interest.
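For readers new to the metric, the standard definition (general background, not specific to this paper) is

```latex
A(t) = t - u(t), \qquad
\bar{A} = \lim_{T \to \infty} \frac{1}{T} \int_0^T A(t)\, dt,
```

where u(t) is the generation time of the most recently received update, \bar{A} is the average age, and the peak age averages the values A(t) takes just before each new update is received.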
Ning Lu (Thompson Rivers University), Bo Ji (Temple University), and Bin Li (University of Rhode Island)
Abstract: We consider the problem of scheduling real-time traffic with hard deadlines in a wireless ad hoc network. In contrast to existing real-time scheduling policies that merely ensure a minimal timely throughput, our design goal is to provide guarantees on both the timely throughput and data freshness in terms of age-of-information (AoI), which is a newly proposed metric that captures the “age” of the most recently received information at the destination of a link. The main idea is to introduce the AoI as one of the driving factors in making scheduling decisions. We first prove that the proposed scheduling policy is feasibility-optimal, i.e., satisfying the per-traffic timely throughput requirement. Then, we derive an upper bound on a considered data freshness metric in terms of AoI, demonstrating that the network-wide data freshness is guaranteed and can be tuned under the proposed scheduling policy. Interestingly, we reveal that the improvement of network data freshness is at the cost of slowing down the convergence of the timely throughput. Extensive simulations are performed to validate our analytical results. Both analytical and simulation results confirm the capability of the proposed scheduling policy to improve the data freshness without sacrificing the feasibility optimality.
Simon Yau, Ping-Chun Hsieh, Rajarshi Bhattacharyya, Kartic Bhargav K R, Srinivas Shakkottai, I-Hong Hou, and P R Kumar (Texas A&M University, USA)
Abstract: An increasing number of applications that will be supported by next generation wireless networks require packets to arrive before a certain deadline for the system to have the desired performance. While many time-sensitive scheduling protocols have been proposed, few have been experimentally evaluated to establish realistic performance. Furthermore, some of these protocols involve high complexity algorithms that need to be performed on a per-packet basis. Experimental evaluation of these protocols requires a flexible platform that is readily capable of implementing and experimenting with these protocols. We present PULS, a processor-supported ultra-low latency scheduling implementation for testing of downlink scheduling protocols with ultra-low latency requirements. Based on our decoupling architecture, programmability of delay-sensitive scheduling protocols is done on a host machine, with low latency mechanisms being deployed on hardware. This enables flexible scheduling policies in software and high hardware function re-usability, while meeting the timing requirements of a MAC. We performed extensive tests on the platform to verify the latencies experienced for per-packet scheduling, and present results that show packets can be scheduled and transmitted in under 1 ms in PULS. Using PULS, we implemented four different scheduling policies and provide detailed performance comparisons under various traffic loads and real-time requirements. We show that in certain scenarios, the optimal policy can maintain a loss ratio of less than 1% for packets with deadlines, while other protocols experience loss ratios of up to 65%.
Sébastien Henri (EPFL, Switzerland), Seva Shneer (Heriot-Watt University, UK), and Patrick Thiran (EPFL, Switzerland)
Abstract: In all networks, link or route capacities fluctuate for multiple reasons, e.g., fading and multi-path effects on wireless channels, interference and contending users on a shared medium, varying loads in WAN routers, impedance changes on power-line channels. These fluctuations severely impact packet delays. In this paper, we study delays in time-varying networks. Intuitively, we expect that for a given average service rate, an increased service rate variability yields larger delays. We find that this is not always the case. Using a queuing model that includes time-varying service rates, we show that for certain arrival rates, a queue with larger service rate variance offers smaller average delays than a queue with the same average service rate and lower service rate variance. We also verify these findings on a wireless testbed. We then study the conditions under which using simultaneously two independent paths helps in terms of delays, for example, in hybrid networks where the two paths use different physical layer technologies. We show that using two paths is not always better, in particular for low arrival rates. We also show that the optimal traffic splitting between the two paths depends on the arrival rate.
Session Chair: Srinivas Shakkottai (TAMU)
Room: E-VI Mong Auditorium (Room 180)
Zhixuan Fang and Longbo Huang (Tsinghua University) and Adam Wierman (California Institute of Technology)
Abstract: Loyalty programs are important tools for sharing platforms seeking to grow supply. Online sharing platforms use loyalty programs to heavily subsidize resource providers, encouraging participation and boosting supply. As the sharing economy has evolved and competition has increased, the design of loyalty programs has begun to play a crucial role in the pursuit of maximal revenue. In this paper, we first characterize the optimal loyalty program for a platform with homogeneous users. We then show that optimal revenue in a heterogeneous market can be achieved by a class of multi-threshold loyalty programs (MTLP) which admit a simple implementation-friendly structure. We also study the performance of loyalty programs in a setting with two competing sharing platforms, showing that the degree of heterogeneity is a crucial factor for both loyalty programs and pricing strategies. Our results show that sophisticated loyalty programs that reward suppliers via stepwise linear functions outperform simple sign-up bonuses, which give suppliers a one-time reward for participating.
Zhiyuan Wang (The Chinese University of Hong Kong), Lin Gao (Harbin Institute of Technology), and Jianwei Huang (The Chinese University of Hong Kong)
Abstract: Mobile network operators (MNOs) have been offering mobile data plans with different data caps and subscription fees as an effective way of achieving price discrimination and improving revenue over the past decade. Recently, some MNOs have been investigating innovative data plans with time flexibility based on the multi-cap scheme. The rollover data plan (allowing the unused data to roll over to the next month) and the credit data plan (allowing users to borrow their data quota from future months) are such innovative data plans with time flexibility (backward and forward). In this paper, we study how the MNO optimizes its multi-cap data plan with time flexibility in the realistic asymmetric information scenario, where each user is associated with multi-dimensional private information, i.e., the data valuation and the network substitutability. Specifically, we consider a multi-dimensional contract-theoretic approach, and analyze the optimal data caps and subscription fees design systematically. We find that each user’s willingness-to-pay for a particular data cap can be captured by the slope of his indifference curve on the contract plane, and the feasible contract (satisfying the incentive compatibility and individual rationality conditions) will allocate larger data caps to users with higher willingness-to-pay. Furthermore, we conduct a market survey to estimate the statistical distribution of users’ private information, and examine the performance of our proposed multi-dimensional contract design using the empirical data. Numerical results further reveal that the optimal contract may provide price discounts (i.e., negative subscription fees) to attract low valuation users to select a small-cap (possibly zero-cap) contract item. A data mechanism with better time flexibility brings users higher payoffs and the MNO more profit, and hence increases social welfare.
Hyojung Lee, Jihwan Bang, and Yung Yi (KAIST)
Abstract: Mobile data crowdsourcing has found a wide range of applications (e.g., spectrum sensing, environmental monitoring) by leveraging the “wisdom” of a potentially large crowd of “workers” (i.e., mobile users). A key metric of crowdsourcing is data accuracy, which relies on the quality of the participating workers’ data (e.g., the probability that the data is equal to the ground truth). However, the data quality of a worker can be its own private information (which the worker learns, e.g., based on its location) that it may have incentive to misreport, which can in turn mislead the crowdsourcing requester about the accuracy of the data. This issue is further complicated by the fact that the worker can also manipulate its effort made in the crowdsourcing task and the data reported to the requester, which can also mislead the requester. In this paper, we devise truthful crowdsourcing mechanisms for Quality, Effort, and Data Elicitation (QEDE), which incentivize strategic workers to truthfully report their private worker quality and data to the requester, and make truthful effort as desired by the requester. The truthful design of the QEDE mechanisms overcomes the lack of ground truth and the coupling in the joint elicitation of worker quality, effort, and data. Under the QEDE mechanisms, we characterize the socially optimal and the requester’s optimal task assignments, and analyze their performance. We show that the requester’s optimal assignment is determined by the largest “virtual valuation” rather than the highest quality among workers, which depends on the worker’s quality and the quality’s distribution. We evaluate the QEDE mechanisms using simulations which demonstrate the truthfulness of the mechanisms and the performance of the optimal task assignments.
Session Chair: Bin Li (URI)
Room: E-VI Mong Auditorium (Room 180)
Thomas Stahlbuhk (MIT), Brooke Shrader (MIT Lincoln Laboratory), and Eytan Modiano (MIT)
Abstract: We study the problem of learning channel statistics in order to efficiently schedule transmissions in wireless networks subject to interference constraints. In particular, we focus on the primary interference model, which requires that at any time the set of activated links be a matching in the corresponding graph. We propose a distributable algorithm that forms greedy matchings in the graph in order to learn the channels’ transmission rates, while simultaneously exploiting previous observations to obtain high throughput. Comparison to the offline solution shows our algorithm to have good performance that scales well with the number of links in the network. We then turn our attention to the stochastic setting where packets randomly arrive to the network and await transmission in queues at the nodes. We develop a queue-length-based scheduling policy that uses the channel learning algorithm as a component. We analyze our method in time-varying environments and show that it achieves the same stability region as that of a greedy matching policy with full channel knowledge (i.e., half of the full stability region).
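The greedy matching at the core of such a policy can be sketched as follows (a generic maximum-weight greedy matching with invented link weights, not the paper's full learning algorithm; greedy matching is a 1/2-approximation, which mirrors the half-of-full-stability-region guarantee):

```python
def greedy_matching(edges):
    """Greedily activate links in decreasing weight order, skipping any
    link that shares a node with an already-activated link
    (the primary interference constraint: activated links form a matching)."""
    matched_nodes = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched_nodes and v not in matched_nodes:
            matching.append((u, v, w))
            matched_nodes.update((u, v))
    return matching

# Links (u, v, weight) with estimated transmission rates as weights.
edges = [("a", "b", 5.0), ("b", "c", 4.0), ("c", "d", 3.0)]
m = greedy_matching(edges)   # picks (a,b), then (c,d); (b,c) conflicts
```

In the paper's setting the weights themselves are unknown and must be learned from observations, which is what couples the matching step to the bandit-style exploration.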
Hyeryung Jang (King's College London) and HyungSeok Song and Yung Yi (KAIST)
Abstract: In this paper, we consider the problem of recovering a graph that represents the statistical data dependency among nodes for a set of data samples generated by nodes, which provides the basic structure to perform an inference task, such as MAP (maximum a posteriori). This problem is referred to as structure learning. When nodes are spatially separated in different locations, running an inference algorithm requires a non-negligible amount of message passing, incurring some communication cost. We inevitably have the trade-off between the accuracy of structure learning and the cost we need to pay to perform a given message-passing based inference task because the learnt edge structures of data dependency and physical connectivity graph are often highly different. In this paper, we formalize this trade-off in an optimization problem which outputs the data dependency graph that jointly considers learning accuracy and message-passing cost. We focus on a distributed MAP as the target inference task due to its popularity, and consider two different implementations, ASYNC-MAP and SYNC-MAP that have different message-passing mechanisms and thus different cost structures. In ASYNC-MAP, we propose a polynomial time learning algorithm that is optimal, motivated by the problem of finding a maximum weight spanning tree. In SYNC-MAP, we first prove that it is NP-hard and propose a greedy heuristic. For both implementations, we then quantify how the probability that the resulting data graphs from those learning algorithms differ from the ideal data graph decays as the number of data samples grows, using the large deviation principle, where the decaying rate is characterized by some topological structures of both original data dependency and physical connectivity graphs as well as the degree of the trade-off, which provides some guideline on how many samples are necessary to obtain a certain learning accuracy. 
We validate our theoretical findings through extensive simulations, which show a good match with the analysis.
Qian Ma (The Chinese University of Hong Kong), Biying Shou (City University of Hong Kong), Jianwei Huang (The Chinese University of Hong Kong), and Tamer Başar (University of Illinois at Urbana-Champaign)
Abstract: For Internet-based services, users’ quality of service (QoS) depends not only on the available resource (capacity) but also on the number of users who use the resource simultaneously (i.e., the congestion effect). When a new Internet-based service provider first enters the market, there can be uncertainty regarding both capacity and congestion, and hence uncertainty in QoS. In this paper, we consider participation-dependent social learning of the QoS through users’ online reviews, where the QoS changes with the number of review participants. We study how such a learning process affects the provider’s dynamic pricing strategy. With a simple two-period model, we analyze the strategic interactions between the provider and the users, and characterize the provider’s optimal two-period dynamic pricing policy. Our results show that when the capacity is small or the users’ prior QoS belief is high, the provider will choose a higher introductory price in the first period (than the price in the second period). This is in sharp contrast with the common practice of setting a lower introductory price to attract users (when congestion is not an issue). Furthermore, the learning process is beneficial to a provider with a large capacity.
Room: E-VI Mong Auditorium (Room 180)
Abstract: Wi-Fi Goes to Town is a Wi-Fi based roadside hotspot network designed to operate at vehicular speeds with picocell (meter-sized) coverage areas. Wi-Fi Goes to Town APs make delivery decisions for the vehicular clients they serve at millisecond-level granularity, exploiting very fine-grained path diversity in roadside networks. To accomplish this, we introduce new buffer management algorithms that integrate with 802.11 frame aggregation to allow participating APs to manage each other's queues, rapidly quenching each other's transmissions and flushing each other's queues.
Session Chair: Lei Yang (UNR)
Room: E-VI Mong Auditorium (Room 180)
Wei Bao, Dong Yuan, Keqi Shi, Weiyu Ju, and Albert Y. Zomaya (University of Sydney)
Abstract: Caching is essential for data-intensive mobile applications to reduce duplicate data transmission. In this paper, we study optimal probabilistic caching and re-caching policies in mobile networks, where file popularity may change over time. First, we propose a Probabilistic File Re-caching (PFR) policy to match the updated popularity: through PFR, files that are cached (resp. not cached) are probabilistically opted out (resp. opted in). PFR offers substantial advantages: (1) it can handle the huge combinatorial number of possible situations; (2) it minimizes the expected number of replaced files; (3) its computational complexity is low. Second, we study a utility optimization problem in the mobile network, in order to optimally decide the probability that each file is cached and whether a file should be downloaded from a peer device or directly from the server. Even though the optimization problem is non-convex in nature, we devise a computationally efficient Optimal Probabilistic Caching and Requesting (OPCR) policy, by decoupling the decision variables, to derive a globally optimal solution. Finally, we develop a real-world prototype and conduct trace-driven simulations to validate and evaluate the proposed PFR and OPCR policies.
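As an illustration of the probabilistic opt-in/opt-out idea (a minimal sketch under assumed per-file caching probabilities, not the paper's exact PFR policy), the following standard coupling moves a cache from old per-file caching probabilities to new target ones while keeping the expected number of replaced files small: a cached file survives with probability min(1, p_new/p_old), and an uncached file is opted in with probability (p_new - p_old)/(1 - p_old) when its target probability has grown.

```python
import random

def recache(cached, p_old, p_new):
    """Probabilistically update a cache so that each file f ends up cached
    with probability p_new[f], given it was previously cached with
    probability p_old[f]. cached: set of currently cached file ids;
    p_old, p_new: dicts mapping file id -> caching probability."""
    updated = set()
    for f in p_new:
        if f in cached:
            # keep a cached file with probability min(1, p_new / p_old)
            if random.random() < min(1.0, p_new[f] / max(1e-12, p_old[f])):
                updated.add(f)
        else:
            # opt in an uncached file with probability (p_new - p_old) / (1 - p_old)
            gap = max(0.0, (p_new[f] - p_old[f]) / max(1e-12, 1.0 - p_old[f]))
            if random.random() < gap:
                updated.add(f)
    return updated
```

In the degenerate cases the coupling is deterministic: a file whose target probability drops to 0 is always evicted, and one whose target rises to 1 is always fetched.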
Kai Han, Chaoting Xu, and Fei Gui (School of Computer Science and Technology, University of Science and Technology of China), Shaojie Tang (University of Texas at Dallas), He Huang (Soochow University), and Jun Luo (Nanyang Technological University)
Abstract: Viral marketing through online social networks (OSNs) has attracted great interest in the literature. However, the fundamental problem of how to optimize the “pure gravy” of a marketing strategy through influence propagation in OSNs still remains largely open. In this paper, we consider a practical setting where the “seed nodes” in an OSN can only be probabilistically activated by the product discounts allocated to them, and make the first attempt to seek a discount allocation strategy that maximizes the expected difference between the profit and the cost (i.e., the revenue) of the strategy. We show that our problem is much harder than the conventional influence maximization problems investigated in previous work, as it can be formulated as a non-monotone and non-submodular optimization problem. To address it, we propose a novel “surrogate optimization” approach as well as two randomized algorithms that find approximate solutions with constant performance ratios with high probability. We evaluate the performance of our approach using real social networks. Extensive experimental results demonstrate that our proposed approach significantly outperforms previous work in both revenue and running time.
Zhangyu Guan, Lorenzo Bertizzolo, Emrecan Demirors, and Tommaso Melodia (Northeastern University)
Abstract: This article investigates the basic design principles for a new Wireless Network Operating System (WNOS), a radically different approach to software-defined networking (SDN) for infrastructure-less wireless networks. Departing from well-understood approaches inspired by OpenFlow, WNOS provides the network designer with an abstraction hiding (i) the lower-level details of the wireless protocol stack and (ii) the distributed nature of the network operations. Based on this abstract representation, the WNOS takes network control programs written on a centralized, high-level view of the network and automatically generates distributed cross-layer control programs based on distributed optimization theory that are executed by each individual node on an abstract representation of the radio hardware.
We first discuss the main architectural principles of WNOS. Then, we present a new approach to automatically generating solution algorithms for each of the resulting subproblems. Finally, we illustrate a prototype implementation of WNOS on software-defined radio devices and test its effectiveness on specific cross-layer control problems. Experimental results indicate that, based on the automatically generated distributed control programs, WNOS achieves 18%, 56%, and 80.4% utility gains in networks with low, medium, and high levels of interference, respectively; perhaps more importantly, we illustrate how the global network behavior can be controlled by modifying a few lines of code on a centralized abstraction.
Jinbei Zhang (The Chinese University of Hong Kong), Weijie Wu (Future Network Theory Lab, 2012 Labs, Huawei Technologies Co., Ltd), and John C.S. Lui (The Chinese University of Hong Kong)
Abstract: Network function virtualization (NFV) can significantly reduce the operation cost and speed up the deployment of network services to market. Under NFV, a network service is composed of a chain of ordered virtual functions, which we call a "network function chain." A fundamental question is: given a number of network function chains, on which servers should we place these functions, and how should we chain them together? This is challenging due to the intricate dependency relationships among functions and the intrinsically complex nature of the optimization. In this paper, we formulate the function placement and chaining problem as an integer optimization, where each variable indicates whether one service chain can be deployed on a configuration, and each configuration is one possible function placement and chaining for a service chain. While this problem is generally NP-hard, we show that it can be mapped to an exponential number of min-cost flow problems. Instead of solving all of them, one can select a small number of mapped min-cost flow problems that are likely to have a low cost. To achieve this, we relax the integer problem into a fractional linear program, and theoretically prove that the fractional solutions possess desirable properties: the number and the utilization of selected configurations can be upper and lower bounded, respectively. Based on these properties, we select "good" configurations from the fractional solution and determine the mapped min-cost flow problems, which helps us develop efficient algorithms for network function placement and chaining. Via extensive simulations, we show that our algorithms significantly outperform existing heuristic and greedy algorithms and achieve near-optimal performance.
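The reduction to min-cost flow is specific to the paper, but the subroutine it relies on is standard. As a self-contained illustration (not the paper's construction), here is a successive-shortest-augmenting-path min-cost flow solver using Bellman-Ford, which repeatedly pushes flow along the cheapest residual path from source to sink.

```python
def min_cost_flow(n, edges, s, t, maxf):
    """Send up to maxf units from s to t at minimum cost.
    edges: list of (u, v, capacity, cost). Returns (flow, cost)."""
    graph = [[] for _ in range(n)]  # adjacency: [to, cap, cost, rev_index]
    def add_edge(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])  # residual edge
    for u, v, cap, cost in edges:
        add_edge(u, v, cap, cost)
    flow = cost = 0
    while flow < maxf:
        # Bellman-Ford shortest path in the residual graph
        dist = [float("inf")] * n
        dist[s] = 0
        prev = [None] * n  # predecessor as (node, edge index)
        updated = True
        while updated:
            updated = False
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, w, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
                        prev[v] = (u, i)
                        updated = True
        if dist[t] == float("inf"):
            break  # no augmenting path remains
        # bottleneck capacity along the shortest path
        d, v = maxf - flow, t
        while v != s:
            u, i = prev[v]
            d = min(d, graph[u][i][1])
            v = u
        # push d units and update residual capacities
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= d
            graph[graph[u][i][0]][graph[u][i][3]][1] += d
            v = u
        flow += d
        cost += d * dist[t]
    return flow, cost
```

Bellman-Ford is used (rather than Dijkstra) because residual edges carry negative costs; potentials would make Dijkstra applicable but are omitted for brevity.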
Session Chair: Peter Reiher (UCLA)
Room: E-VI Mong Auditorium (Room 180)
Lei Yang (University of Nevada, Reno), Mengyuan Zhang and Shibo He (Zhejiang University), Ming Li (University of Nevada, Reno), and Junshan Zhang (Arizona State University)
Abstract: We develop an auction framework for privacy-preserving data aggregation in mobile crowdsensing, where the platform plays the role of an auctioneer to recruit workers for a sensing task. In this framework, the workers are allowed to report privacy-preserving versions of their data to protect their data privacy; and the platform selects workers based on their sensing capabilities, which aims to address the drawbacks of game-theoretic models that cannot ensure the accuracy level of the aggregated result, due to the existence of multiple Nash Equilibria. Observe that in this auction-based framework, there exist externalities among workers’ data privacy, because the data privacy of each worker depends on both her injected noise and the total noise in the aggregated result, which is intimately related to which workers are selected to fulfill the task. To achieve a desirable accuracy level of the data aggregation in a cost-effective manner, we explicitly characterize the externalities, i.e., the impact of the noise added by each worker on both the data privacy and the accuracy of the aggregated result. Further, we explore the problem structure, characterize the hidden monotonicity property of the problem, and determine the critical bid of workers, which makes it possible to design a truthful, individually rational, and computationally efficient incentive mechanism. The proposed incentive mechanism can recruit a set of workers to approximately minimize the cost of purchasing private sensing data from workers subject to the accuracy requirement of the aggregated result. We validate the proposed scheme through theoretical analysis as well as extensive simulations.
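The coupling between per-worker noise and aggregate accuracy can be made concrete with a simple sketch (illustrative assumptions: Laplace perturbation and an averaging query; this is not necessarily the paper's exact mechanism). Each worker perturbs her value locally with noise calibrated to her own privacy budget, and the variance of the platform's estimate is the sum of the individual noise variances scaled by the number of workers.

```python
import numpy as np

def private_average(values, epsilons, sensitivity=1.0):
    """Each worker i adds Laplace noise with scale sensitivity / epsilons[i]
    (local differential-privacy style) before reporting; the platform
    averages the noisy reports. Returns (estimate, variance_of_estimate)."""
    noisy = [v + np.random.laplace(0.0, sensitivity / eps)
             for v, eps in zip(values, epsilons)]
    n = len(values)
    # Var(Laplace(0, b)) = 2 * b**2; averaging divides the total by n**2
    var = sum(2 * (sensitivity / eps) ** 2 for eps in epsilons) / n ** 2
    return sum(noisy) / n, var
```

This makes the externality visible: adding a worker with a tight budget (small epsilon, large noise scale) can worsen the aggregate's variance even as it adds a data point.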
Yan Meng, Zichang Wang, Wei Zhang, Peilin Wu, and Haojin Zhu (Shanghai Jiao Tong University), Xiaohui Liang (University of Massachusetts Boston), and Yao Liu (University of South Florida)
Abstract: With the prevalence of smart devices and home automation, voice commands have become a popular User Interface (UI) channel in the IoT environment. Although a Voice Control System (VCS) has the advantage of great convenience, it is extremely vulnerable to spoofing attacks (e.g., replay attacks, hidden/inaudible command attacks) due to its broadcast nature. In this study, we present WiVo, a device-free voice liveness detection system based on the prevalent wireless signals generated by IoT devices, without any additional devices or sensors carried by the users. The basic motivation of WiVo is to distinguish an authentic voice command from a spoofed one via its corresponding mouth motions, which can be captured and recognized by wireless signals. To achieve this goal, WiVo builds a theoretical model to characterize the correlation between wireless signal dynamics and the user’s voice syllables. WiVo extracts unique features from both voice and wireless signals, and then calculates the consistency between these different types of signals in order to determine whether the voice command is generated by the authentic user of the VCS or by an adversary. To evaluate the effectiveness of WiVo, we build a testbed based on the Samsung SmartThings framework and include WiVo as a new application, which is expected to significantly enhance the security of existing VCSs. We have evaluated WiVo with 6 participants and different voice commands. Experimental results demonstrate that WiVo achieves an overall 99% detection rate with a 1% false accept rate, and incurs low latency overhead.
Boris Aronov (New York University), Alon Efrat and Ming Li (University of Arizona), Jie Gao and Joseph S. B. Mitchell (Stony Brook University), Valentin Polishchuk (Linköping University), Boyang Wang (EECS Dept, University of Cincinnati), Hanyu Quan (Xidian University), and Jiaxin Ding (Stony Brook University)
Abstract: With the ubiquitous adoption of smartphones and mobile devices, it is now common practice for one’s location to be sensed, collected, and likely shared through social platforms. While such data can be helpful for many applications, users are becoming aware of the privacy issues involved in handling location and trajectory data. Even when some users voluntarily share their location information (e.g., for receiving location-based services, or for crowdsourcing systems), their location information may leak the whereabouts of other users, through co-location events (when two users are at the same location at the same time) combined with other side information, such as upper bounds on movement speed. It is therefore crucial to understand how much information one can derive about others’ positions through co-location events and occasional GPS location leaks of some of the users.
In this paper we formulate the problem of inferring the locations of mobile agents, present theoretically proven bounds on the amount of information that can be leaked in this manner, study their geometric nature, and present algorithms matching these bounds. We show that even under a very weak set of assumptions on trajectory patterns, where users are not obliged to follow any ‘reasonable’ movement pattern, one can infer very accurate estimates of users’ locations even when they opt not to share them. Furthermore, this information can be obtained using almost linear-time algorithms, suggesting the practicality of the method even for huge volumes of data.
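The geometric nature of such bounds can be illustrated with a simple sketch (an illustration under an assumed speed-cap model, not the paper's algorithm): a GPS leak at time t confines a user to a disk whose radius grows as v_max times the elapsed time, and a co-location event forces the co-located user into the intersection of two such disks.

```python
import math

def feasible_radius(t_leak, t_query, v_max):
    """Radius of the disk that must contain an agent at t_query,
    centered at the position leaked at t_leak, under speed cap v_max."""
    return v_max * abs(t_query - t_leak)

def colocated_bound(leak_a, leak_b, t_query, v_max):
    """If users A and B were co-located at t_query, that shared point lies in
    the intersection of the two disks induced by A's and B's last GPS leaks.
    Each leak is (x, y, t). Returns the width of the lens-shaped intersection
    along the line of centers, or None if co-location is impossible."""
    (xa, ya, ta), (xb, yb, tb) = leak_a, leak_b
    ra = feasible_radius(ta, t_query, v_max)
    rb = feasible_radius(tb, t_query, v_max)
    d = math.hypot(xa - xb, ya - yb)
    if d > ra + rb:
        return None  # disks disjoint: the speed bound rules out co-location
    return min(ra + rb - d, 2 * ra, 2 * rb)
```

When the two leaks are far apart relative to the elapsed time, the lens is narrow, so a single co-location event can pin down a non-sharing user's position quite precisely.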
Guocheng Liao (The Chinese University of Hong Kong), Xu Chen, and Jianwei Huang (The Chinese University of Hong Kong)
Abstract: We study a privacy-preserving data collection problem by jointly considering data reporters’ data correlation and social relationships. A data collector gathers data from individuals to perform a certain analysis with a privacy-preserving mechanism. Due to data correlation, the data analysis based on the reported data can cause privacy leakage to other individuals (even if they do not report data). The data reporters take such a privacy threat into account, owing to the social relationships among individuals. This motivates us to formulate a two-stage Stackelberg game: In Stage I, the data collector selects some individuals as data reporters and designs a privacy-preserving mechanism for a sum query analysis. In Stage II, the selected data reporters contribute their data with possible perturbations (through adding noise). By analyzing the data reporters’ equilibrium decisions in Stage II, we show that given any fixed reporter set, only the data reporter with the most significant joint consideration of social relationship and data correlation may add noise to his reported data; the remaining data reporters will truthfully report their data. In Stage I, we derive the data collector’s optimal privacy-preserving mechanism and propose an efficient algorithm to select the data reporters. We conclude that the data collector should jointly capture the impact of data correlation and social relationship to ensure that all data reporters truthfully report their data. We conduct extensive simulations based on random networks and real-world social data to investigate the impact of data correlation and the social network on the system. We find that the availability of social network information is more critical to the data collector than data correlation information.