Abstracts
of Selected Ph.D. Theses in the Area of Mobile Computing
Awarded in 1998
Challenges to Reliable Data Transport over Heterogeneous
Wireless Networks
Hari Balakrishnan
University of California at Berkeley, USA
August 1998
The full dissertation is available online.
The Transmission Control Protocol (TCP) is the de facto
standard for reliable data transmission in the Internet
today. While TCP has been tuned to work well over traditional
wired networks, its performance over wireless networks
is much worse. This performance degradation results
from several effects:
- The preponderance of channel error-induced packet losses compared to congestion-induced losses.
- Asymmetric effects and latency variation due to adverse interactions between media-access protocols and TCP.
- Small TCP transmission windows due to low wireless channel bandwidths.
While TCP adapts well to network congestion, it does
not adequately handle the vagaries of wireless media.
In this thesis, we address these challenges in detail
and design solutions to them. These solutions incorporate
link-layer techniques as well as enhancements to TCP
at the sender and receiver.
The adverse effects of wireless bit-errors on TCP performance
arise primarily because TCP misinterprets wireless losses
as being due to congestion. We present the design and
implementation of a novel protocol, called the Berkeley
Snoop Protocol, that exploits cross-layer protocol optimizations
to improve performance. In addition, we design a mechanism
called Explicit Loss Notification that can successfully
distinguish between congestion and channel error-induced
packet losses to substantially enhance end-to-end performance.
These mechanisms have been demonstrated to provide performance
improvements between 100% and 2000% across a range of
bit-error rates in various cellular topologies in a
Lucent WaveLAN-based wireless LAN, for data transfer
to and from mobile hosts and when wireless transit links
are present.
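
To make the snoop idea concrete, the sketch below shows a hypothetical snoop-style agent at the base station (the class, its methods, and the segment and ACK objects are illustrative assumptions, not the Berkeley Snoop implementation). It caches data segments heading to the mobile host, retransmits locally when duplicate ACKs suggest a wireless loss, and suppresses those duplicates so the fixed sender never mistakes the loss for congestion.

```python
# Hypothetical snoop-style agent at the base station (illustrative only).
class SnoopAgent:
    def __init__(self, wireless_link):
        self.link = wireless_link   # assumed to expose send(segment)
        self.cache = {}             # seq -> unacknowledged data segment
        self.last_ack = -1

    def on_data_from_sender(self, segment):
        """Cache a copy of each data segment, then forward it to the mobile."""
        self.cache[segment.seq] = segment
        self.link.send(segment)

    def on_ack_from_mobile(self, ack):
        """New ACKs clean the cache; duplicates trigger a local retransmission."""
        if ack.num > self.last_ack:
            self.cache = {s: seg for s, seg in self.cache.items() if s >= ack.num}
            self.last_ack = ack.num
            return ack              # forward the new ACK to the fixed sender
        lost = self.cache.get(ack.num)
        if lost is not None:
            self.link.send(lost)    # retransmit over the wireless hop only
        return None                 # suppress the duplicate ACK
```

Keeping wireless losses local to the last hop in this way is what prevents the sender's congestion window from being needlessly shrunk.
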
Asymmetric networks pose a challenge to feedback-based protocols like TCP, because the characteristics
of the reverse path used for acknowledgments (ACKs)
have the potential to seriously affect forward performance.
We classify asymmetry with respect to TCP performance
into several categories, including bandwidth asymmetry,
latency asymmetry, and loss-rate asymmetry. We design
several end-to-end and transport-aware link-layer solutions,
such as ACK filtering, TCP sender adaptation, and ACK
reconstruction, to obtain significantly better performance
over such networks. These techniques lead to performance
improvements for Internet access over packet radio networks
such as Metricom's Ricochet network and for bandwidth-asymmetric
networks such as Hybrid's wireless cable network.
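
As one concrete picture of these reverse-channel techniques, the sketch below illustrates ACK filtering (the packet attributes and queue discipline are assumptions for illustration). Because TCP ACKs are cumulative, an ACK still waiting on a slow uplink is made redundant by any later ACK for the same connection and can be dropped without losing information.

```python
from collections import deque

def enqueue_ack(uplink_queue: deque, new_ack) -> deque:
    """Hypothetical ACK filter for a constrained reverse link: drop queued
    ACKs of the same connection that new_ack supersedes, then enqueue it."""
    filtered = deque(
        pkt for pkt in uplink_queue
        if not (getattr(pkt, "is_ack", False)
                and pkt.conn == new_ack.conn
                and pkt.ack_num <= new_ack.ack_num)
    )
    filtered.append(new_ack)
    return filtered
```

ACK reconstruction, also mentioned above, would then regenerate a smoother ACK stream beyond the constrained link so that the sender's self-clocking is preserved.
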
Low channel bandwidths are common in many wireless
networks as well as on many Internet paths today. This,
and the short connections common in many Web transfers,
lead to TCP windows being very small. This greatly affects
its data-driven loss recovery mechanism and causes a
large number of timeouts. In addition to implementing
and evaluating the performance of TCP Selective Acknowledgments,
we analyze data from packet-level traces of the Web
server of the Atlanta Olympic Games to motivate enhancements
to TCP's loss recovery mechanism. Our enhancements lead
to an Enhanced Recovery scheme for TCP that greatly
reduces the number of timeouts when windows are small.
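
The arithmetic behind the timeout problem is simple enough to show directly. The snippet below uses TCP's standard three-duplicate-ACK fast-retransmit threshold; the window sizes are illustrative.

```python
DUPACK_THRESHOLD = 3   # standard fast-retransmit trigger

def recovers_without_timeout(window_segments: int, lost_index: int = 0) -> bool:
    """Each segment sent after the lost one elicits one duplicate ACK;
    with fewer than three such segments, only a timeout can recover."""
    duplicate_acks = window_segments - lost_index - 1
    return duplicate_acks >= DUPACK_THRESHOLD

for window in (2, 3, 4, 8):
    outcome = "fast retransmit" if recovers_without_timeout(window) else "timeout"
    print(f"{window}-segment window, first segment lost -> {outcome}")
```
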
In this research, we learn some general lessons about
protocol design for heterogeneous networks. Specifically,
we demonstrate the significant performance benefits
of cross-layer protocol optimizations, of soft-state
agents in the network infrastructure, and of data-driven
loss recovery over timer-driven recovery.
Today, wireless data networks are not widely deployed
in the Internet despite the availability of a wide range
of wireless technologies: one of the main reasons for
this is their poor price-to-performance ratio for reliable
data transfers. We believe (and hope!) that by significantly
improving end-to-end TCP performance by up to an order
of magnitude in some cases, our solutions will accelerate
the widespread deployment and adoption of wireless data
networks and make them an integral part of the Internet
infrastructure of the future.
Information Dissemination Through Broadcast Delivery
Chi-Jiun Su
Electrical Engineering Department
Polytechnic University, Brooklyn, New York, USA
January 1998
Broadcast delivery is becoming a method of choice for
the distribution of information to a large user population
in many new applications. One of the major issues in
broadcast delivery is the scheduling of broadcast data
so that the access delay of users' requests is minimized.
The complementary problem in push-based delivery
is managing the user's cache so as to minimize the
mismatch between the server's schedule and the user's
access pattern.
We formulate the broadcast scheduling problem as
a Markov Decision Process for both push-based and pull-based
delivery and identify properties of the optimal scheduling
policy. We then specify a class of near-optimal scheduling
policies that incorporate some of the characteristics
of the optimal policy and turn out to yield good
performance for both push- and pull-based delivery.
The results are readily extended to a system with multiple
broadcast channels and to information items of unequal
lengths. We demonstrate through a numerical study that,
as the request generation rate increases, the achievable
performance of the pull- and push-based systems becomes
almost identical, and we prove this for the case of equal
access probabilities.
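
To give a flavor of what such near-optimal policies can look like, the sketch below uses a priority-index rule that trades off an item's access probability against the time since it was last broadcast. The squared waiting time and the example numbers are assumptions chosen for illustration, not necessarily the exact policy derived in the thesis.

```python
def next_item(access_prob, last_broadcast, now):
    """Broadcast the item with the largest index: popularity times
    squared elapsed time since its last transmission (assumed form)."""
    def index(i):
        return access_prob[i] * (now - last_broadcast[i]) ** 2
    return max(range(len(access_prob)), key=index)

# Example: three items with skewed popularity, broadcast over ten slots.
probs = [0.6, 0.3, 0.1]
last = [0, 0, 0]
schedule = []
for slot in range(1, 11):
    item = next_item(probs, last, slot)
    last[item] = slot
    schedule.append(item)
print(schedule)   # popular items recur more often, but no item starves
```
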
For push/pull hybrid delivery, we provide algorithms
for partitioning the server's database, allocating bandwidth,
and scheduling, and we investigate their performance
through extensive numerical experiments. The results are
also extended to information items of unequal length.
We identify the optimal user-cache management strategy
for push-based delivery, which minimizes the mean response
time of the user's requests. Computational issues are also
discussed, and cache update policies with limited look-ahead
are given as implementable approximations to the optimal
policy. We present some interesting special cases for
which limited look-ahead policies are optimal. It is
also shown that the same formulation can be used when
the objective is to minimize the number of deadline
misses.
Finally, we consider the problem of joint broadcast
scheduling and cache management for push-based delivery
and propose an approach which yields up to 40% performance
improvement over traditional non-joint schemes when
prefetching is done at the user's side. A two-level scheduling
algorithm is given which remedies some pathological
cases and performs slightly better than one-level schemes.
A Proxy Based Filtering Mechanism For the Mobile
Environment
Bruce Zenel
Columbia University, NY, USA
February 1998
Host mobility complicates the standard networking model
in unexpected ways. It increases network heterogeneity,
causing difficulties for applications that expect a
high or constant amount of network bandwidth. Mobility
increases cost when a user switches from a relatively
inexpensive high-bandwidth network to a low-bandwidth
pay-by-call or pay-by-byte network. Mobility may also
reduce security when a user moves from a "safe" network
onto a foreign one.
This situation can be improved by introducing a proxy
between the mobile client and its peer. The purpose
of this intermediary is to process data moving between
the client and peer. The type of processing performed
depends on the desired result: the proxy can hide network
heterogeneity via data filtering, reduce cost by using
compression, and increase security through the use of
secure protocols between the client and proxy. This
dissertation presents a general-purpose proxy system
and describes how it has been applied to the mobile environment.
The overall architecture is described, examples of its
use are given, and a study of its feasibility and performance
is presented.
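
A schematic sketch of that pipeline follows; the filter functions and the Proxy class are illustrative assumptions rather than the dissertation's implementation.

```python
import zlib

def strip_inline_images(payload: bytes) -> bytes:
    """Toy content filter: drop a hypothetical inline-image marker."""
    return payload.replace(b"<IMG>", b"")

def compress(payload: bytes) -> bytes:
    """Reduce bytes on the wire for pay-by-byte links."""
    return zlib.compress(payload)

class Proxy:
    """Relays traffic between peer and mobile client through chosen filters."""
    def __init__(self, filters):
        self.filters = filters

    def relay(self, payload: bytes) -> bytes:
        for apply_filter in self.filters:
            payload = apply_filter(payload)
        return payload   # forwarded on to the mobile client

low_bandwidth = Proxy([strip_inline_images, compress])
original = b"<IMG>" + b"mostly text " * 50
print(len(original), "->", len(low_bandwidth.relay(original)))
```

Because the filter chain lives at the proxy, it can in principle be changed as the client moves between networks without modifying the peer.
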
On the Broadcast and Clustering Problems in Peer-to-Peer
Networks
Stefano Basagni
Università di Milano, Italy
May 1998
The full dissertation is available online.
In this thesis, we study two problems in the field of
wireless mobile computing, namely, the broadcast and
the clustering problems for peer-to-peer networks, i.e.,
wireless networks in which all nodes can move (these
networks are also often termed ad hoc networks, or multi-hop
networks).
The first problem we are interested in concerns algorithms
for implementing broadcast primitives in peer-to-peer
networks. Informally, broadcast is the task initiated
by a single node, called the source, whose goal is to
send a single message m to all the nodes in the network.
This problem has been widely investigated, and several
algorithms have been proposed for peer-to-peer networks.
Most of them use probabilistic techniques to deliver
m to all the nodes (randomized algorithms), or assume
that the topology of the network is known to each node
(centralized algorithms). Both these assumptions give
efficient broadcast protocols, but lead to algorithms
that are not suitable when time-dependent applications
are run over the network, or when the topology of the
network is uncertain, as in a mobile environment. Thus,
we have investigated solutions to the broadcast problem
that overcome these limitations. The protocols we present
here have the following characteristics. They are deterministic,
so that an a priori known bound on the maximum delay
of message transmission can be easily determined. They
are delivery-guaranteed, in that the correct delivery
of a message is always guaranteed within a bounded amount
of time, and they are completely distributed, i.e.,
we assume that the nodes in the network have no
a priori knowledge of the entire network topology,
or even of their neighbors. The bounds induced
by the algorithms are shown to be optimal for networks
with certain topologies, and they partially bridge the
gap between a previously known logarithmic lower bound
for broadcast protocols with the stated required properties
and a trivial linear upper bound.
The other problem that we have investigated is the
clustering problem for peer-to-peer networks. Clustering
a network means partitioning its nodes into clusters so
that certain desirable properties are guaranteed. Examples
of issues addressed by clustering include the need to
control the spatial reuse of the channel (e.g., in terms
of frequency and/or time), and the need to reduce routing
and control information overhead. Recent research in
the field shows that an effective solution to this problem
can be obtained through the organization of nodes in
clusters. In this dissertation we characterize the clustering
problem as a maximum weight independent set (MWIS) problem.
Once a weight is assigned to each node of the network,
we look for clusterheads such that the set of clusterheads
is an independent set of maximum weight. The weights
are introduced to reflect how well suited a node is
to the role of clusterhead. We show that a simple variation
of the greedy algorithm for the MWIS problem leads to
a clustering algorithm with provable performance properties
and certain optimality characteristics.
We finally give a completely distributed implementation
of the centralized clustering algorithm and, by using
the techniques developed for the broadcast problem,
we obtain efficient (deterministic) bounds on the distributed
clustering problem for peer-to-peer networks.
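
To give a concrete feel for the greedy, weight-based selection described above, here is a small centralized sketch; the weights and topology are illustrative assumptions, and the distributed protocol developed in the thesis is considerably more involved.

```python
def greedy_clusterheads(weights, neighbors):
    """Pick the heaviest remaining node as a clusterhead, remove it and its
    neighbors, and repeat; the result is an independent set of the graph."""
    remaining = set(weights)
    heads = []
    while remaining:
        head = max(remaining, key=lambda v: weights[v])
        heads.append(head)
        remaining -= {head} | neighbors[head]
    return heads

# Example on a small 5-node topology (weights and links are made up).
w = {"a": 5, "b": 3, "c": 4, "d": 2, "e": 1}
nbrs = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "e"},
        "d": {"b"}, "e": {"c"}}
print(greedy_clusterheads(w, nbrs))   # -> ['a', 'd', 'e']
```

Because each selected node removes its neighbors from further consideration, the chosen clusterheads form an independent set, with heavier (more suitable) nodes preferred.
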
Mobile Data Access
Brian Noble
Carnegie Mellon University, Pittsburgh, PA
May 1998
The full dissertation is available online.
Mobile devices and wireless networks are becoming more
powerful and affordable, leading to the growing importance
of mobile data access. Unfortunately, mobile environments
are inherently turbulent; the resources available to
mobile clients change dramatically and unpredictably
over time.
This dissertation puts forth the claim that clients
must adapt their behavior to such turbulence by trading
quality of fetched data for performance in fetching
it. Such adaptation is best provided by application-aware
adaptation --- a collaboration between the operating
system and its applications. In this collaboration,
the system is responsible for providing the mechanisms
for adaptation, while applications are free to set adaptive
policies.
The dissertation next describes the design and implementation
of Odyssey, a platform for mobile data access. This
discussion focuses on the mechanisms provided by the
system, the architecture comprising those mechanisms,
and the application programming interface from which
applications construct adaptive policies. The dissertation
then presents three applications that make use of these
mechanisms: a video player, a web browser, and a speech
recognition system. These applications adapt their behavior
to changes in available network bandwidth.
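
The division of labor can be pictured with a small sketch; the fidelity table, thresholds, and callback below are assumptions for illustration, not Odyssey's actual interface. The system delivers resource notifications, while the application alone decides how to trade quality for performance.

```python
# Illustrative adaptation policy (assumed names, not the Odyssey API).
FIDELITY_LEVELS = [            # (minimum kbit/s, behavior) -- assumed values
    (200, "full-quality video"),
    (50,  "reduced frame rate"),
    (0,   "still frames only"),
]

def pick_fidelity(bandwidth_kbps):
    """Application-chosen policy: trade data quality for performance."""
    for floor, level in FIDELITY_LEVELS:
        if bandwidth_kbps >= floor:
            return level

def on_resource_callback(new_estimate_kbps):
    """Upcall assumed to fire when bandwidth leaves the tolerance window
    the application registered with the system."""
    level = pick_fidelity(new_estimate_kbps)
    print(f"bandwidth ~{new_estimate_kbps} kbit/s -> {level}")

on_resource_callback(300)   # -> full-quality video
on_resource_callback(30)    # -> still frames only
```
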
There are three questions to ask of this prototype
and its applications. First, how agile can the prototype
be in the face of changing network bandwidth? Second,
does adaptation to substantial changes in bandwidth
provide benefit to individual applications? Third, is
the collaboration between the system and applications
necessary when several applications are run concurrently?
These questions cannot be answered simply by subjecting
the prototype to a real wireless network. Such networks
provide neither stable nor repeatable performance, and
hence are not suitable for comparative evaluations.
Instead, the prototype is evaluated using trace modulation.
This technique allows one to capture the performance
characteristics of a wireless network over a short period
of time, and reliably recreate that performance in an
otherwise live system. Evaluating the system under modulation
shows that Odyssey has good agility with respect to
changes in network bandwidth, that individual applications
can benefit from adaptive strategies, and that the system's
involvement in adaptation is crucial for concurrent
applications.
Energy-conserving Protocols for Wireless Data
Networks
Jason Keith Redi
Boston University, MA, USA
March 1998
One of the most important factors which will influence
the future success of ubiquitous mobile computing is
the convenience and usefulness of the mobile devices.
Major defining factors of the utility of these nodes
will be the lifetime, size and weight of the power source.
As active battery research does not predict a substantial
increase in the energy capacity of today's batteries,
it is important to minimize the power required by the
mobile device. Among existing protocols for wireless
packet reception, classical access and automatic repeat
request (ARQ) protocols are either not energy-conserving
or lead to unacceptable delays. In this research, we
consider methods to substantially reduce the amount
of power necessary for reception of packets at the mobile
devices.
We first assume an error-free channel between the base
station and the mobile node. We introduce three protocols
which conserve energy by reducing the amount of time
that the mobile receiver is in the ``ON'' state, while
meeting the application delay constraints. We analyze
the protocols and consider their performance under a
variety of traffic loads. We next address a few of their
individual drawbacks through enhancements such as dynamic
adaptation of the node wakeup schedules. Since one of
the defining features of the wireless environment is
the error-prone channel, we follow with protocols designed
to conserve power in the multi-path fading environment.
As uplink transmissions can typically require twice
as much energy as reception, it is particularly important
to reduce the number of acknowledgments sent by the
mobile node's automatic repeat request (ARQ) protocol.
We propose four energy-conserving alternatives to the
classical protocols and show through analysis and simulation
that the proposed protocols allow a substantial reduction
in the energy used for acknowledgments in the uplink,
without unreasonably increasing the average packet delay,
or substantially reducing the throughput capacity. Finally,
we adapt these protocols into a dynamic, asymmetric
version where the complexity of the protocol is off-loaded
to the base station in order to more easily support
multiple traffic types, reduce the requirements of the
mobile node, and more efficiently handle changing channel
conditions in the presence of multipath fading.
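
The basic energy/delay tradeoff behind the receiver-scheduling protocols can be illustrated with a toy simulation; the slot length, power figures, and fixed wake period below are assumptions, not parameters from the thesis.

```python
def simulate(arrival_slots, wake_period, slot_ms=10, rx_mw=300, sleep_mw=5):
    """Toy model: the mobile's receiver is on only in slots that are
    multiples of wake_period, when it drains everything buffered so far."""
    horizon = max(arrival_slots) + wake_period + 1
    buffered, energy_mj, delays = [], 0.0, []
    for slot in range(horizon):
        buffered.extend(t for t in arrival_slots if t == slot)
        awake = slot % wake_period == 0
        energy_mj += (rx_mw if awake else sleep_mw) * slot_ms / 1000.0
        if awake and buffered:
            delays.extend(slot - t for t in buffered)
            buffered = []
    return round(energy_mj, 2), round(sum(delays) / len(delays), 2)

# Longer wake periods cut receiver energy but add buffering delay.
for period in (1, 5, 20):
    print(period, simulate([3, 4, 11, 27], period))
```

Longer wake periods cut the receiver's on-time sharply, at the cost of packets waiting longer in the base station's buffer, which is exactly the tension the proposed protocols manage against application delay constraints.
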
On Optimizing Performance in Mobile Packet Radio
Networks
Madhavi W. Subbarao
Department of Electrical and Computer Engineering
The Johns Hopkins University, Baltimore, MD, USA
March 1998
The full dissertation is available online.
Mobile packet radio networks are a class of mobile wireless
systems. They consist of a collection of movable terminals
distributed over a wide geographical area and the radio
links which connect them. In the design of such a network,
there are many physical layer, data link layer, and
network layer components that must be determined, such
as the signaling scheme, error-correction coding, transmission
range, routing strategy, and nodal transmission probability.
The best choice for these different design parameters
is not straightforward since their effects on network
performance are interdependent. Two important concerns
for efficient network performance are to find the optimum
transmission range and optimum code rate to be used
by each node. The optimum combination is a tradeoff
between the distance covered by a transmission and the
amount of data conveyed by the transmission. To capture
this tradeoff, we introduce a new performance measure,
information efficiency, which can be viewed as a measure
of spectral efficiency for wireless networks. We analyze
two types of spread-spectrum networks using the slotted
ALOHA channel access protocol, a frequency-hopping network
and a direct-sequence network. By maximizing the information
efficiency and local throughput, we find the jointly
optimum code rate, transmission range, and slotted ALOHA
transmission probability for each node in the network.
We show that using trellis-coded modulation as the coding
scheme leads to the highest information efficiency in
a direct-sequence packet radio network. Finally, we
focus on the benefits of conserving power in these systems
and develop a power efficient routing algorithm for
packet radio networks.
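
The flavor of the range/code-rate tradeoff can be sketched with a toy model; the link-success function, its parameters, and the search grid below are assumptions for illustration only, not the dissertation's analysis.

```python
import math

def success_prob(code_rate, hop_distance, snr_ref=20.0, path_loss=4.0):
    """Assumed toy link model: SNR falls off with distance, and lower
    code rates (more redundancy) survive a lower SNR."""
    snr = snr_ref / hop_distance ** path_loss
    return math.exp(-code_rate / snr)

def info_efficiency(code_rate, hop_distance):
    """Expected useful data carried times distance advanced per transmission."""
    return code_rate * hop_distance * success_prob(code_rate, hop_distance)

best = max(
    ((r / 10, d / 10) for r in range(1, 10) for d in range(5, 31)),
    key=lambda p: info_efficiency(*p),
)
print("toy-optimal (code rate, hop distance):", best)
```

The point of the sketch is only that neither the code rate nor the transmission range can be chosen in isolation, which is what motivates maximizing a joint measure such as information efficiency.
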
Translucent Cache Management for Mobile Computing
Maria R. Ebling
Carnegie Mellon University, USA
March 1998
The full dissertation is available online.
Recently, distributed file systems have aggressively
exploited caching to provide high availability to their
clients. Such systems allow users to access and modify
cached data even when clients become disconnected (e.g.
a laptop removed from its docking station) or weakly
connected (e.g. a laptop connected via a modem) to servers.
In doing so, these file systems violate one of the key
properties of caching: transparency.
When disconnected or weakly connected, these systems
cannot always maintain the illusion of connectivity.
For instance, the system may not be able to service
a cache miss or may only be able to do so with a lengthy
delay. When the illusion of connectivity breaks down,
the usability of the system is undermined. Early designers
of these systems either did not recognize or failed
to appreciate the extent to which users would be affected.
After extensive experience with the Coda file system,
however, we have witnessed the arduous learning curve
experienced by new Coda users. We have also observed
occasional, but serious, confusion caused by modal behaviors
introduced to cope with the range of network connectivity
encountered by mobile clients.
This dissertation describes the design, implementation,
and evaluation of the CodaConsole, a graphical interface
that makes caching translucent to users of such systems.
To make caching translucent, the interface exposes critical
aspects of caching to support availability, while hiding
noncritical details to preserve usability. For example,
the system informs users about the availability of their
tasks, but does not bother them with the minutiae of cache
consistency. The interface also offers opportunities
for users to control where network resources are spent.
To evaluate the CodaConsole, I performed a usability
test. This study showed that users were able to learn
how to use the interface in about one hour. They were
able to operate the interface with little difficulty.
They were also able to understand problems indicated
by the interface and then determine what actions might
resolve those problems. Perhaps the most surprising
result, however, was the fact that novice Coda users
performed almost as well as expert Coda users. This
interface successfully makes caching, in the presence
of disconnected or weakly connected operation, translucent.
Context-Aware and Location Systems
Giles John Nelson
University of Cambridge, United Kingdom
January 1998
The full dissertation is available online.
Computing systems are becoming increasingly mobile,
with users interacting with many networked devices,
both fixed and portable, over the course of a day. This
has led to the emergence of a new paradigm of computing,
the context-aware system. A user's context can be described
as their relationship to computing devices, colleagues
and the surrounding physical environment. Applications
using such a system are made aware of changes in the
physical and computing environment, and adjust themselves
accordingly. For example, with knowledge of the locations
of people and equipment, a suitable nearby display can
be determined to deliver a message to a mobile user.
Context-aware systems are currently ad-hoc in nature,
typically developed around a particular technology and
oriented towards supporting a particular type of application.
Current location systems are similarly ad-hoc with no
notion of abstraction above a particular locating technology.
Furthermore, little analysis has been conducted into
the application requirements of a location system. This
general lack of systems design makes current approaches
difficult to extend and generalise.
This thesis proposes a systems architecture for the
support of context-aware applications. It is asserted
that such an architecture requires three main areas
of capability. Firstly, a context-aware system needs
to monitor the physical environment and make this information
available in a uniform way. Secondly, location information
requires higher-level management together with a representation
of static parts of the physical domain. Further, support
is required to enable complex, asynchronous monitoring
of spatial areas. Thirdly, application support is required
to access distributed sensor and location services,
so facilitating the simple specification of context-aware
scenarios.
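
One way to picture the asynchronous spatial monitoring called for above is the following sketch; the service, its API, and the coordinates are illustrative assumptions rather than the architecture proposed in the thesis.

```python
class LocationService:
    """Toy location manager: applications register interest in a region
    and are called back when a tracked object enters or leaves it."""
    def __init__(self):
        self.watches = []        # (region as (x0, y0, x1, y1), callback)
        self.positions = {}      # object -> (x, y)

    def watch(self, region, callback):
        self.watches.append((region, callback))

    def update(self, obj, pos):
        old = self.positions.get(obj)
        self.positions[obj] = pos
        for (x0, y0, x1, y1), callback in self.watches:
            inside_now = x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1
            inside_before = (old is not None
                             and x0 <= old[0] <= x1 and y0 <= old[1] <= y1)
            if inside_now != inside_before:
                callback(obj, "enter" if inside_now else "leave")

svc = LocationService()
svc.watch((0, 0, 10, 10), lambda who, event: print(who, event, "meeting room"))
svc.update("badge-42", (5, 5))    # badge-42 enter meeting room
svc.update("badge-42", (20, 5))   # badge-42 leave meeting room
```
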
Roam: A Scalable Replication System for Mobile
and Distributed Computing
David Howard Ratner
UCLA, Los Angeles, CA, USA
January 1998
The full dissertation is available online.
Mobile computing is rapidly becoming a way of life.
Users carry their laptops, PDAs, and other portable
devices with them almost constantly, whether their mobility
takes them across town or across the world. Recent hardware
innovations and improvements in chip technology have
made mobile computing truly feasible.
Unfortunately, unlike the hardware industry, much of
today's system software is not ``mobile-ready.'' Such
is the case with the replication service. Nomadic users
require replication to store copies of critical data
on their mobile machines, since disconnected or poorly
connected machines must rely primarily on local resources.
However, the existing replication services are designed
for stationary environments, and do not provide mobile
users with the capabilities they require. Replication
in mobile environments requires fundamentally different
solutions than those previously proposed, because nomadicity
presents a fundamentally new and different computing
paradigm. Mobile users require several key features
from the replication system: the ability for direct
synchronization between any two replicas, the capability
for widespread scaling and large numbers of replicas,
and detailed control over what files reside on their
local (mobile) replica.
Today's replication systems do not and cannot provide
these features. Therefore, mobile users must adapt their
behavior to match the service provided to them, complicating
their ability to work while mobile and effectively hindering
mobility by costing the user additional time, money,
and system-management effort simply to operate in a
nomadic mode.
Here we present Roam, a replication system designed
to satisfy the requirements of the mobile user. Roam
is based on the Ward Model, a replication architecture
specifically aimed at mobile environments. Together
with the Ward Model and new, distributed algorithms
such as improved garbage collection techniques and version
vector management, Roam provides a scalable replication
solution for the mobile user. We describe not only the
design and motivation of such a system, but its implementation
and performance as well.
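
Version vectors, mentioned above as one of the distributed mechanisms Roam builds on, are standard enough to sketch generically; the code below is not Roam's, and the replica names and counters are illustrative.

```python
def dominates(a, b):
    """True if vector a has seen every update that b has."""
    return all(a.get(replica, 0) >= count for replica, count in b.items())

def compare(a, b):
    if dominates(a, b) and dominates(b, a):
        return "identical"
    if dominates(a, b):
        return "a is newer"
    if dominates(b, a):
        return "b is newer"
    return "conflict: concurrent updates must be reconciled"

def merge(a, b):
    """Element-wise maximum, taken after reconciliation."""
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in set(a) | set(b)}

laptop = {"laptop": 3, "desktop": 1}
desktop = {"laptop": 2, "desktop": 2}
print(compare(laptop, desktop))   # conflict: concurrent updates must be reconciled
print(merge(laptop, desktop))     # {'laptop': 3, 'desktop': 2}
```
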