34 catalog results

Book
13 p.
Recently there has been much interest in combining the speed of layer-2 switching with the features of layer-3 routing. This has been prompted by numerous proposals, including IP Switching, Tag Switching, ARIS, CSR, and IP over ATM. In this paper, we study IP Switching and evaluate the performance claims made by Newman et al. In particular, using nine network traces, we study how well IP Switching performs with traffic found in campus, corporate, and Internet Service Provider (ISP) environments. Our main finding is that IP Switching will lead to a high proportion of datagrams that are switched: over 75% in all of the environments we studied. We also investigate the effects that different flow classifiers and various timer values have on performance, and note that some choices can result in a large VC space requirement. Finally, we present recommendations for the flow classifier and timer values, as a function of the VC space of the switch and the network environment being served.
SAL3 (off-campus storage), Special Collections
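To make the flow-classification mechanism in the abstract above concrete, here is a minimal Python sketch of a timer-driven classifier: a flow is promoted to a switched VC after a threshold number of packets, and its VC is reclaimed after an idle timeout. The thresholds, timeouts, and names are illustrative assumptions, not the classifiers evaluated in the paper.

```python
import time

class FlowClassifier:
    """Toy timer-driven flow classifier in the spirit of IP Switching.

    A flow (e.g., a src/dst/port 5-tuple) is promoted to a switched VC
    after `setup_threshold` packets, and its VC is reclaimed after
    `idle_timeout` seconds without traffic. These two parameters are
    the knobs the paper studies: aggressive setup switches more
    datagrams but consumes more VC space.
    """

    def __init__(self, setup_threshold=2, idle_timeout=60.0):
        self.setup_threshold = setup_threshold
        self.idle_timeout = idle_timeout
        self.flows = {}        # flow_id -> (packet_count, last_seen)
        self.switched = set()  # flows currently bound to a VC

    def on_packet(self, flow_id, now=None):
        """Returns True if this datagram is switched, False if routed."""
        now = time.time() if now is None else now
        count, _ = self.flows.get(flow_id, (0, now))
        self.flows[flow_id] = (count + 1, now)
        if count + 1 >= self.setup_threshold:
            self.switched.add(flow_id)      # set up a VC for this flow
        return flow_id in self.switched

    def expire(self, now=None):
        """Tear down VCs for flows idle longer than the timeout."""
        now = time.time() if now is None else now
        for fid, (_, last) in list(self.flows.items()):
            if now - last > self.idle_timeout:
                self.flows.pop(fid)          # reclaim VC space
                self.switched.discard(fid)
```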
Book
15 p.
Architectures based on a non-blocking fabric, such as a crosspoint switch, are attractive for use in high-speed LAN switches, ATM switches and IP routers. These fabrics, coupled with memory bandwidth limitations, dictate that queues be placed at the input of the switch. But it is well known that input-queueing can lead to low throughput, and does not allow the control of latency through the switch. This is in contrast to output-queueing, which maximizes throughput, and permits the accurate control of packet latency through scheduling. We ask the question: Can a switch with combined input and output queueing be designed to behave identically to an output-queued switch? In this paper, we prove that if the switch uses virtual output queueing, and has an internal speedup of just four, it is possible for it to behave identically to an output queued switch, regardless of the nature of the arriving traffic. Our proof is based on a novel scheduling algorithm, known as Most Urgent Cell First. This result makes possible switches that perform as if they were output-queued, yet use memories that run more slowly.
SAL3 (off-campus storage), Special Collections
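The abstract above names the Most Urgent Cell First scheduler; the sketch below illustrates only its core ordering idea, under the assumption that each cell carries the departure time it would have in a shadow output-queued switch. Urgency is that departure time minus the current time, and a matching is built greedily, most urgent cell first. The paper's actual algorithm and its speedup-of-four analysis are more involved than this.

```python
def mucf_schedule(cells, now):
    """Greedy matching sketch of Most Urgent Cell First (MUCF).

    `cells` is a list of (input_port, output_port, oq_departure_time)
    tuples, where oq_departure_time is when the cell would depart a
    shadow output-queued switch. Smaller urgency = more urgent. Each
    scheduling phase matches at most one cell per input and per
    output, taking the most urgent eligible cells first.
    """
    matching = {}          # input_port -> output_port
    used_outputs = set()
    for inp, out, departure in sorted(cells, key=lambda c: c[2] - now):
        if inp not in matching and out not in used_outputs:
            matching[inp] = out
            used_outputs.add(out)
    return matching
```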
Book
14 p.
This paper presents the design of the scheduler for an M x N input-queued switch. It is assumed that each input maintains a single queue for arriving multicast cells and that only the cell at the head of line (HOL) can be observed and scheduled at one time. The scheduler is required to be work-conserving, which means that no output port may be idle as long as there is an input cell destined to it. Furthermore, the scheduler is required to be fair, which means that no input cell may be held at HOL for more than M cell times (M is the number of input ports). The aim is to find a work-conserving, fair policy that delivers maximum throughput and minimizes input queue latency.
When a scheduling policy decides which cells to schedule, contention may require that it leave a residue of cells to be scheduled in the next cell time. The selection of where to place the residue uniquely defines the scheduling policy. It is demonstrated that a policy which always concentrates the residue, subject to our fairness constraint, outperforms all other policies. We present one such policy, called TATRA, and analyze it geometrically. We also present a heuristic round-robin policy called mRRM that is simple to implement in hardware, is fair, and performs quite well when compared to a concentrating algorithm.
SAL3 (off-campus storage), Special Collections
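A rotating-priority multicast grant, sketched below, illustrates the kind of round-robin policy the abstract above describes: each output is granted to the requesting input nearest a shared pointer, so every head-of-line cell drains within M cell times, and ungranted outputs form the residue carried into the next cell time. This is an illustration of the idea, not the mRRM specification.

```python
def rr_multicast_schedule(hol_fanouts, pointer):
    """Round-robin multicast scheduling sketch (mRRM-like).

    hol_fanouts[i] is the set of outputs requested by input i's
    head-of-line cell (empty set = no cell). Each output goes to the
    requesting input closest to the rotating pointer. Returns the
    per-input grants, the per-input residue (outputs still to serve),
    and the advanced pointer.
    """
    m = len(hol_fanouts)
    grants = [set() for _ in range(m)]
    for out in set().union(*hol_fanouts):
        for k in range(m):
            i = (pointer + k) % m        # priority order from the pointer
            if out in hol_fanouts[i]:
                grants[i].add(out)       # grant this output to input i
                break
    residue = [req - g for req, g in zip(hol_fanouts, grants)]
    return grants, residue, (pointer + 1) % m
```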
Book
xiv, 419 p.
SAL1&2 (on-campus shelving), SAL3 (off-campus storage), Special Collections
Book
xv, 95 p.
SAL1&2 (on-campus shelving), SAL3 (off-campus storage), Special Collections
Book
xiv, 100 p.
SAL1&2 (on-campus shelving), SAL3 (off-campus storage), Special Collections
Book
xiii, 104 leaves, bound.
SAL3 (off-campus storage), Special Collections
Book
xiv, 123 p.
SAL3 (off-campus storage), Special Collections
Book
xv, 152 p.
SAL3 (off-campus storage), Special Collections
Book
xii, 121 leaves, bound.
SAL3 (off-campus storage), Special Collections
Book
xvi, 160 p.
SAL3 (off-campus storage), Special Collections
Book
1 online resource.
We present NUMFabric, a novel transport design that provides flexible and fast bandwidth allocation control. NUMFabric is flexible: it enables operators to specify how bandwidth is allocated amongst contending flows to optimize for different service-level objectives such as weighted fairness, minimizing flow completion times, multipath resource pooling, prioritized bandwidth functions, etc. NUMFabric is also very fast: it converges to the specified allocation 2.3 times faster than prior schemes. Underlying NUMFabric is a novel distributed algorithm for solving network utility maximization problems that exploits weighted fair queueing packet scheduling in the network to converge quickly. We evaluate NUMFabric using realistic data center topologies and highly dynamic workloads and show that it is able to provide flexibility and fast convergence in such stressful environments.
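The abstract above says NUMFabric solves network utility maximization (NUM) problems in a distributed way. For background, the sketch below shows one iteration of the classic dual-decomposition NUM solver for weighted proportional fairness (utility w_s log x_s): each flow sets its rate from the prices on its path, and each link adjusts its price toward full utilization. NUMFabric's contribution is converging much faster than this textbook scheme, whose mechanism is not shown here.

```python
def num_iteration(flows, links, prices, step=0.01):
    """One round of dual-decomposition NUM for weighted proportional
    fairness: U_s(x) = w_s * log(x_s), so the price-optimal rate is
    x_s = w_s / (sum of prices on the flow's path).

    flows:  flow_id -> (weight, [link ids on path])
    links:  link_id -> capacity
    prices: link_id -> current price (mutated in place)
    """
    rates = {}
    for f, (w, path) in flows.items():
        rates[f] = w / max(sum(prices[l] for l in path), 1e-9)
    for l, cap in links.items():
        load = sum(rates[f] for f, (_, path) in flows.items() if l in path)
        # Gradient step: raise the price if overloaded, lower it if underused.
        prices[l] = max(prices[l] + step * (load - cap), 1e-9)
    return rates
```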
Book
1 online resource.
Personalization, the practice of dynamically tailoring functionality to the needs and wants of each user, is a natural step in the evolution of many computing systems (such as PCs, search engines, and recommendation systems). The merits of personalization are obvious: Users get services more quickly and accurately, context and information become more relevant, and the interaction between users and the personalized system becomes more amicable. This thesis explores personalization as a natural step in the network's evolution, along with its associated benefits and challenges. At first, personalization might seem irrelevant for networks---after all, the network has a single, unambiguous and objective task to complete: Carry packets from one side to the other as quickly as possible. But in practice, networks are more complicated, continuously making decisions about which traffic gets priority, how different applications are charged, and to which WiFi network a user should connect and with what password. Making these decisions while ignoring users often produces suboptimal results. For example, the net neutrality debate highlights the dangers of dampening user choice and innovation when ISPs and Content Providers decide which applications are prioritized or zero-rated. Similarly, our day-to-day interaction with several user-agnostic networks results in a fragmented user experience: We use different authentication methods and credentials for each network; reachability to desired resources depends on the network to which we are connected; and we have very little, if any, control over policies such as firewalls and QoS settings. This thesis presents a generic network architecture for personalizing network functionality. Network operators structure and expose functionality as high-level services (e.g., a fast lane or a zero-rated lane), and then let users tailor these services by expressing their own preferences. A critical piece of personalized network services is how users communicate their preferences to the network. Network cookies, a generic mapping abstraction, provide this user-to-network interface in a way that is simple yet expressive, respects the tussle between different stakeholders (e.g., users, ISPs, content providers, and policymakers) in terms of security, privacy, revocability and authentication, and can be practically deployed in existing networks. Leveraging network cookies and user preferences, I describe a user-focused alternative for the net neutrality debate: Enable fast or zero-rated lanes and allow users to decide which traffic goes over them. Through user studies of zero rating and fast lanes, I demonstrate that user preferences are heavy-tailed, and that users are willing to express their preferences if there is a simple way to do so. Network cookies allow users to express high-level preferences (e.g., prioritize a website or a mobile application) with high accuracy in the presence of encryption and middleboxes. I validate the approach through the design and prototype implementation of Boost, a user-driven fast lane deployed in 160 home networks. The last part of the thesis extends the concept of personalization by enabling a fully personalized network experience: Users define their network properties once (i.e., their WiFi SSID and password, devices, and policies such as QoS and firewalls) and their network follows them wherever they go---at the workplace, at public hotspots, or at a friend's home.
Personal WiFi networks provide users with a simplified and consistent experience regardless of how they are connected to the network. I describe the design and prototype implementation of BeHop, a personalized WiFi infrastructure deployed in a student dorm at Stanford University.
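The sketch below illustrates the cookie abstraction described above: the operator mints an unforgeable token binding a user to an exposed service (say, a fast lane), the user attaches it to chosen traffic, and the network verifies the token and applies the mapped service. The field layout and HMAC construction are assumptions for illustration, not the thesis's wire format.

```python
import hmac, hashlib

SERVICES = {"fastlane", "zerorate"}  # services the operator exposes (illustrative)

def mint_cookie(user_id, service, key):
    """Operator side: bind a user to a service with an unforgeable tag."""
    assert service in SERVICES
    payload = f"{user_id}:{service}".encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()[:16]
    return f"{user_id}:{service}:{tag}"

def apply_cookie(cookie, key):
    """Network side: verify the cookie and return the service to apply,
    or None if it is invalid (traffic then gets default treatment)."""
    user_id, service, tag = cookie.rsplit(":", 2)
    payload = f"{user_id}:{service}".encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()[:16]
    return service if hmac.compare_digest(tag, expected) else None

key = b"operator-secret"
c = mint_cookie("alice", "fastlane", key)
assert apply_cookie(c, key) == "fastlane"  # tagged traffic gets the fast lane
```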
Book
1 online resource.
Computer networks have evolved tremendously over the past fifty years, growing from their humble beginnings as interconnections for only several dozen academic hosts in the 1970s to vast, hundred-million-dollar pieces of infrastructure supporting commerce and communication services fundamental to our modern way of life. A major goal of the networking community is to build services, tools, and systems to manage these large, important networks efficiently while enforcing target policies (e.g., filtering packets from black-listed sources or preventing congestion on target links). Software defined networking (SDN) has been a transformative architecture for building management services that enforce such policies. We propose and implement a new transactional execution model for software defined networks. This execution model explicitly guarantees that either all management commands execute and their changes take effect or none do, resolving serious bugs discovered in existing software defined networking systems. Using fine-grained locking, a split execution model, and a sharded architecture, a transactional execution model can scale to tens of thousands of transactions per second. Finally, we show that a transactional model provides the versatility to express almost all network policies achievable in non-transactional systems, and allows new classes of debugging applications for software defined networks.
Special Collections
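The all-or-nothing guarantee in the abstract above can be pictured as a two-phase commit over switch rule tables, sketched below: stage every command, commit only if every switch accepts, and roll back otherwise. The prepare/commit/abort methods are assumed switch-agent hooks; the dissertation's actual design (fine-grained locking, split execution, sharding) is considerably richer.

```python
def transactional_update(switches, rule_updates):
    """All-or-nothing rule installation, two-phase-commit style.

    switches:     switch_id -> agent with prepare/commit/abort methods
                  (assumed hooks for this sketch)
    rule_updates: switch_id -> rules to install

    Either every switch commits its staged rules, or no switch's
    state changes at all.
    """
    prepared = []
    for sw, update in rule_updates.items():
        if switches[sw].prepare(update):     # stage without taking effect
            prepared.append(sw)
        else:
            for p in prepared:
                switches[p].abort()          # undo all staged changes
            return False                     # aborted: no effects visible
    for sw in prepared:
        switches[sw].commit()                # all effects take hold together
    return True
```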
Book
1 online resource.
Today's networks require much human intervention to keep them working. Every day network engineers wrestle with router misconfigurations, fiber cuts, faulty interfaces, mislabeled cables, software bugs, intermittent links, and a myriad of other issues that cause networks to misbehave or fail completely. Network engineers hunt down bugs using the most rudimentary tools and track down root causes using a combination of accrued wisdom and intuition. We found that many network problems are associated with data plane behaviors, i.e., how the network transports data plane packets. This dissertation discusses the design and implementation of automatic data plane testing tools under various network scenarios. We first present Automatic Test Packet Generation (ATPG), a foundational framework for data plane testing when all data plane information is available and accurate. ATPG reads router configurations and generates a device-independent model. The model is used to generate a minimum set of test packets to (minimally) exercise every link in the network or (maximally) exercise every rule in the network. Test packets are sent periodically, and detected failures trigger a separate mechanism to localize the fault. NetSonar extends ATPG by allowing incomplete or inaccurate data plane information as input. Earlier test techniques were either white box (assuming complete forwarding knowledge) or black box (assuming no knowledge). We argue that the former is infeasible in large networks, and the latter is inefficient and incomplete. NetSonar is the first gray-box tester for networks and the first tester deployed in a production network. NetSonar uses only coarse forwarding information and does not require knowledge of load-balancing hash functions. Finally, we move our focus to data center networks with thousands of switches and millions of forwarding table entries. Data center owners use static analysis tools that examine the topology and forwarding tables to check for loops, black holes, and reachability failures. However, existing tools do not scale to a large data center network. Moreover, no existing tool addresses the problem of potential false positives when analyzing a snapshot of a network whose state is constantly in flux. We present Libra, a tool for verifying forwarding tables in large data center networks that simplifies ATPG's data plane model to significantly improve verification performance.
Special Collections
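Choosing a minimum set of test packets, as the abstract above describes, is an instance of minimum set cover; the sketch below uses the standard greedy approximation over the rules each candidate packet exercises. The candidate packets themselves would come from the device-independent model; this sketch assumes they are given.

```python
def pick_test_packets(candidates):
    """Greedy set cover in the spirit of ATPG's test-packet selection.

    candidates: packet_id -> set of rules (or links) that packet
    exercises, as computed from the device-independent network model.
    Repeatedly picks the packet covering the most still-uncovered
    rules; greedy is within a log factor of the minimum cover.
    """
    uncovered = set().union(*candidates.values())
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda p: len(candidates[p] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            break                 # remaining rules unreachable by any packet
        chosen.append(best)
        uncovered -= gained
    return chosen

# Three candidate packets covering four rules; two packets suffice.
print(pick_test_packets({"p1": {1, 2}, "p2": {2, 3}, "p3": {3, 4}}))
```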
Book
1 online resource.
During peak viewing time, well over 50% of US Internet traffic is streamed video from Netflix and YouTube. To provide a better streaming experience, these services adapt their video rates by observing and estimating the available capacity. However, accurate capacity estimation is difficult due to highly variable throughput and complex interactions between layers. As a result, existing rate adaptation algorithms often lead to suboptimal video quality and unnecessary rebuffers. This thesis proposes an alternative buffer-based approach to adapting the video rate. Rather than presuming that capacity estimation is always required, this approach starts the design from the playback buffer occupancy alone, and then asks when capacity estimation can be helpful. This design process leads to two separate phases of operation: during the steady-state phase, when the buffer encodes adequate information, we choose the video rate based only on the playback buffer; during the startup phase, when the buffer contains little information, we augment the buffer-based design with capacity estimation. This approach is tested with a series of field experiments spanning millions of Netflix users from May to September 2013. The results demonstrate that although simple capacity estimation is important during the startup phase, it is unnecessary in the steady state. The buffer-based approach allows us to reduce the rebuffer rate by 10-20% compared to a commercial algorithm used in Netflix, while delivering a similar overall average video rate and a higher video rate in steady state.
Special Collections
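The steady-state phase described above reduces to a map from buffer occupancy to video rate. The sketch below shows one such map, a piecewise-linear "reservoir and cushion" design of the kind associated with buffer-based adaptation; the constants and the exact shape are illustrative assumptions, not Netflix's production algorithm.

```python
def buffer_based_rate(buffer_s, rates, reservoir=60.0, cushion=120.0):
    """Map playback buffer occupancy (seconds) to a video rate.

    Below `reservoir` seconds, always pick the lowest rate to guard
    against rebuffering; above `reservoir + cushion`, the highest; in
    between, interpolate linearly and pick the highest available rate
    not exceeding the interpolated target. No capacity estimate is
    consulted, which is the point of the steady-state design.
    """
    rates = sorted(rates)
    r_min, r_max = rates[0], rates[-1]
    if buffer_s <= reservoir:
        return r_min
    if buffer_s >= reservoir + cushion:
        return r_max
    target = r_min + (r_max - r_min) * (buffer_s - reservoir) / cushion
    return max(r for r in rates if r <= target)

# A kbps ladder; 90 s of buffer sits a quarter of the way up the cushion.
print(buffer_based_rate(90.0, [235, 375, 560, 750, 1050, 1750, 3000]))  # 750
```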
Book
1 online resource.
The network constitutes a significant portion of a datacenter's cost, and its performance is critical to scaling datacenter applications. Datacenter operators thus strive to efficiently utilize network resources while providing high, predictable network performance for applications. Recent trends in large-scale data storage and processing result in increasingly more data replicated within a datacenter for parallel access and fault tolerance. State-of-the-art systems employ unicast-based replication, which uses network bandwidth inefficiently. IP multicast enables data replication within the network at line rate. However, IP multicast is not congestion controlled, and is therefore incompatible with the dominant datacenter transport, TCP. In this thesis, I investigate an enhancement to TCP --- Congestion-controlled Single-source Multicast Optimization (TCP-COSMO). TCP-COSMO adds support for multicast transmissions to TCP, while congestion controlling multicast flows at high line rate (10 Gbps). I show that, with multicast replication, one can scale distributed storage system read/write rates linearly with offered load, and reduce tail write latency by up to two orders of magnitude compared to existing unicast-based replication schemes. I extend TCP-COSMO to support queue-aware congestion control algorithms like Datacenter TCP. This allows running throughput-oriented multicast flows without degrading the performance of concurrent, short-lived, latency-sensitive flows. I further present PredNet, which integrates multicast into systems that provide bandwidth guarantees in datacenters. With bandwidth guarantees, I demonstrate predictable multicast transfer times and rapid convergence of bandwidth shares among competing sets of flows. Furthermore, I study multipath multicast forwarding for leaf-spine datacenter networks. I show that multipath forwarding enables multicast packet replication in the spine layer without causing extra congestion. I also leverage the network topology to scale the effective capacity of limited hardware multicast forwarding state. I show that distributed storage systems can scale to tens of thousands of servers participating in multicast write replication without exhausting multicast forwarding state in switches.
Special Collections
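One standard way to make a single-source multicast flow TCP-compatible, as the abstract above requires, is to pace the sender by its most congested receiver. The sketch below keeps a per-receiver AIMD window and sends at the minimum; this is a generic construction for intuition, and is not claimed to be TCP-COSMO's actual mechanism.

```python
class MulticastWindow:
    """Slowest-receiver window sketch for single-source multicast.

    Each receiver's feedback drives a per-receiver AIMD window; the
    sender transmits at the minimum across receivers, so the multicast
    flow never outruns its most congested path.
    """

    def __init__(self, receivers, init_cwnd=10.0):
        self.cwnd = {r: init_cwnd for r in receivers}

    def on_ack(self, r):
        self.cwnd[r] += 1.0 / self.cwnd[r]           # additive increase

    def on_loss(self, r):
        self.cwnd[r] = max(self.cwnd[r] / 2.0, 1.0)  # multiplicative decrease

    def send_window(self):
        return min(self.cwnd.values())
```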
Book
1 online resource.
Networking researchers and practitioners strive for flexible network control to rapidly introduce new functionality into production networks. In this dissertation, we present a practical approach to rapidly introducing new dataplane functionality into privately owned networks such as data centers, in which both end-hosts and the network are under one administrative entity. At a high level, our approach partitions the functionality between end-hosts and the network. This separation of concerns between end-hosts and the network is useful in practice: It decouples network function extensibility from network hardware extensibility. End-hosts can deploy new network functions without having to wait for new network hardware. In the first part of the dissertation, we present a new interface between end-hosts and the network. End-hosts embed tiny programs into packets that actively query and manipulate a network's internal state, and execute directly in the dataplane. We then show how this single "tiny packet program" (TPP) interface gives end-hosts unprecedented visibility into network state, enabling them to work with the network at low latency and high throughput to realize many desirable dataplane tasks such as congestion control, measurement, and troubleshooting. This design requires changes both at network switches and at end-hosts, but leverages what each component does best: (a) the ASIC forwards packets at line rate, while executing a tiny program embedded within the packet (if any); and (b) end-hosts send a sequence of TPPs to query network state, and perform arbitrary computation on the queried state in software. We then show that the TPP design is feasible in hardware, at a reasonable cost. We also show that TPPs are useful by implementing three different research proposals, and we outline seven other research projects that can be implemented using TPPs, highlighting that TPPs are generic; we also present an architecture in which they can be made secure. In the second and third parts of this dissertation, we show how limited visibility at the access links between the network and end-hosts is sufficient to implement a quality-of-service network function entirely at end-hosts, without requiring any changes to network switches.
Special Collections
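To make the TPP interface above concrete, here is a toy interpreter for a two-instruction program embedded in a packet: LOAD copies a named piece of switch state into memory carried by the packet, and STORE writes a packet value back into switch state. The real TPP instruction set and its ASIC execution model are richer; this is only meant to show the shape of the interface.

```python
def execute_tpp(program, packet_mem, switch_state):
    """Run a tiny packet program at one switch hop.

    program:      list of (op, register_name) instructions carried in
                  the packet
    packet_mem:   memory that travels with the packet from hop to hop
    switch_state: stand-in for dataplane registers (queue depth,
                  link utilization, ...); two-op ISA is a simplification
    """
    for op, reg in program:
        if op == "LOAD":                    # switch state -> packet memory
            packet_mem.append(switch_state.get(reg, 0))
        elif op == "STORE":                 # packet memory -> switch state
            switch_state[reg] = packet_mem.pop()
    return packet_mem

# An end-host asks every hop to append its queue depth, gaining
# per-hop congestion visibility from a single probe packet.
hops = [{"queue_bytes": 1500}, {"queue_bytes": 48000}]
mem = []
for state in hops:
    mem = execute_tpp([("LOAD", "queue_bytes")], mem, state)
print(mem)  # [1500, 48000]
```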
Book
1 online resource.
In many engineering disciplines, such as digital design or software engineering, there is an abundance of theoretical foundations and practical tools for verification and debugging. In sharp contrast, the field of networking mostly relies on rudimentary tools such as ping and traceroute, together with the accrued wisdom and intuition of network administrators, for the verification and debugging of networks. Verifying and debugging networks is difficult because (1) the forwarding state (the set of rules that determines how an incoming packet is processed and forwarded by network boxes) is distributed across multiple boxes, expressed in vendor-dependent command line interface (CLI) formats, and defined by the forwarding tables, filter rules, and other configuration parameters; as a result, it is hard to observe and analyze the forwarding state and understand the overall system behavior. (2) The forwarding state is written by multiple independent programs, protocols, and humans, which may result in complex and unpredictable interactions between the independently generated forwarding states. Therefore, the first step in making tools for network verification and debugging is to create a simple model of the forwarding functionality of the network that abstracts away the complexities of understanding the forwarding state. One observation is that packet headers, despite carrying multiple protocols, are just sequences of bits, and networking boxes, despite all their complexities, simply rewrite and forward packet headers. Therefore, in our analytical framework, called Header Space Analysis (HSA), a packet header is viewed as a flat sequence of bits and is modeled as a point in a {0,1}^L space, called the Header Space, where L is the length of the header. Each dimension in the header space corresponds to one bit in the packet header. Networking boxes are modeled as Transfer Functions, transforming packets from one point in the header space to another point or set of points. This easy-to-use formalism abstracts away the complexity of the protocols and the vendor-specific semantics of network boxes, and gives us a model with which to analytically prove properties about networks that are otherwise hard to ensure. In this dissertation, after introducing header space analysis, I describe three sets of techniques and tools for network verification and testing based on HSA: 1) Hassel, a static, offline verification tool for verifying properties such as reachability of end hosts and finding loops and black holes; 2) NetPlumber, a real-time policy and invariant checking tool for networks; and 3) the Automatic Test Packet Generation (ATPG) framework for maximally testing a network with an optimal number of test packets.
Special Collections
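A minimal rendering of the formalism above: headers as points in {0,1}^L written as bit strings, and a box's transfer function as a list of wildcard match/rewrite rules (an 'x' matches or preserves either bit value). Real HSA propagates whole wildcard regions of header space at once; this sketch transforms a single point to show the mechanics.

```python
def matches(header, pattern):
    """Bitwise wildcard match: pattern chars are '0', '1', or 'x'."""
    return all(p in ("x", h) for h, p in zip(header, pattern))

def transfer(header, rules):
    """Toy transfer function for one networking box.

    rules: list of (match, rewrite, out_port); a rewrite char of 'x'
    leaves that header bit unchanged. Returns the transformed header
    and output port, or None if the packet is dropped.
    """
    for match, rewrite, port in rules:
        if matches(header, match):
            out = "".join(h if r == "x" else r for h, r in zip(header, rewrite))
            return out, port
    return None  # no rule matches: dropped

# A 4-bit "box" that clears the first bit and forwards to port 1:
print(transfer("1010", [("1xxx", "0xxx", 1)]))  # ('0010', 1)
```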
Book
1 online resource.
Computing systems account for at least 13% of the electricity use of office buildings. This translates to about 2% of the electricity consumption of the entire US, or the equivalent of the State of New Jersey! As computing becomes pervasive, making these systems more efficient is an opportunity to reduce operational costs and have a positive environmental impact. Unfortunately, current understanding of energy consumption in office buildings is limited and coarse-grained. Without better visibility into how electricity is spent and how much of it is wasted, it is difficult to find ways to reduce it. Powernet -- a multi-year power and utilization study of the computing infrastructure in the Computer Science Department at Stanford University -- begins to address the visibility problem in one building. Powernet's data is collected via a large network of plug-level wireless power meters and software sensors that cover a significant portion of the 2nd, 3rd, and 4th floors of the Gates building at Stanford. The Powernet data show that at least 25% of Gates's electricity is wasted on idle and over-provisioned devices. At the extreme, many desktops operate at near-idle for 75% of the time. The combination of high idle power and low utilization means that a large chunk of energy is wasted. This highlights an opportunity to improve on current computing systems. This dissertation presents a novel system architecture for office computing, Anyware. To save energy, Anyware leverages two observations. First, an increase in energy use does not translate to the same increase in performance. Second, there is a range of resources one can have for a fixed power budget. Anyware's hybrid design splits workload execution between a local low-power client device and a virtual machine (VM) on a backend server. Applications that benefit from hardware optimizations, such as video and graphics, remain local; other tasks (document and picture editing, PDF viewing, etc.) are offloaded to the server. Anyware reduces the energy cost of computing by 70%--80% because the client has a power draw comparable to that of a thin client or a laptop (15 to 20 watts) while the server can host multiple user VMs. Fast I/O, the availability of network resources in a LAN environment, and the increased CPU and memory on the server mean that users get comparable performance at a fraction of the energy cost. Anyware demonstrates that with a new computing architecture, it is possible to have the best of two worlds: desktop performance at the energy cost of thin clients.
Special Collections
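A back-of-the-envelope check of the 70%--80% figure above, under assumed wattages (a typical always-on desktop around 100 W, a client at the cited 15 to 20 W, and one server amortized across its hosted VMs; all three numbers are assumptions, not Powernet measurements):

```python
desktop_w = 100.0            # assumed always-on desktop draw
client_w = 17.0              # within the 15-20 W range cited above
server_w, users = 300.0, 20  # assumed server draw, amortized over 20 VMs

anyware_w = client_w + server_w / users       # 17 + 15 = 32 W per user
savings = 1 - anyware_w / desktop_w
print(f"per-user energy savings: {savings:.0%}")  # 68%, near the cited range
```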
