ELIMINATING RECEIVE LIVELOCK IN AN INTERRUPT-DRIVEN KERNEL





Interrupt-driven systems can provide low overhead and good latency at low offered load, but degrade significantly at higher arrival rates unless care is taken to prevent several pathologies. These are various forms of receive livelock, in which the system spends all its time processing interrupts, to the exclusion of other necessary tasks.

Under extreme conditions, no packets are delivered to the user application or the output of the system. To avoid livelock and related problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution. We modified an interrupt-driven networking implementation to do so; this eliminates receive livelock without degrading other aspects of system performance.

We present measurements demonstrating the success of our approach.

Interrupts are useful because they allow the CPU to spend most of its time doing useful processing, yet respond quickly to events without constantly having to poll for event arrivals. Polling wastes CPU cycles when event rates are low, and it can also increase the latency of response to an event.

Modern systems can respond to an interrupt in a few tens of microseconds; to achieve the same latency using polling, the system would have to poll tens of thousands of times per second, which would create excessive overhead.
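To make the trade-off concrete (the specific numbers here are our own illustration, not measurements from the paper): matching an interrupt response latency of 50 microseconds would require the poller to wake at least once every 50 microseconds, i.e.

    \[ f_{\text{poll}} \;\ge\; \frac{1}{t_{\text{latency}}} \;=\; \frac{1}{50\,\mu\text{s}} \;=\; 20{,}000 \ \text{polls per second,} \]

almost all of which find no work waiting when the event rate is low.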

For a general-purpose system, an interrupt-driven design works best. When this style of design was developed, event rates were low: disks tended to issue events on the order of once per revolution, and first-generation LAN environments generated a few hundred packets per second for any single end-system. Although people understood the need to reduce the cost of taking an interrupt, in general this cost was low enough that any normal system would spend only a fraction of its CPU time handling interrupts.

The world has changed. Multimedia and other real-time applications will become widespread. Multicast and broadcast protocols subject innocent-bystander hosts to loads that do not interest them at all. As a result, network implementations must now deal with significantly higher event rates. Many multi-media and client-server applications share another unpleasant property: unlike traditional network applications (Telnet, FTP, electronic mail), they are not flow-controlled.

Some multi-media applications want constant-rate, low-latency service; RPC-based client-server applications often use datagram-style transports, instead of reliable, flow-controlled protocols. The shift to higher event rates and non-flow-controlled protocols can subject a host to congestive collapse: once the event rate saturates the system, without a negative feedback loop to control the sources, there is no way to gracefully shed load. If the host runs at full throughput under these conditions, and gives fair service to all sources, this at least preserves the possibility of stability.

But if throughput decreases as the offered load increases, the overall system becomes unstable. Interrupt-driven systems tend to perform badly under overload. Tasks performed at interrupt level, by definition, have absolute priority over all other tasks.

If the event rate is high enough to cause the system to spend all of its time responding to interrupts, then nothing else will happen, and the system throughput will drop to zero. We call this condition receive livelock: the system is not deadlocked, but it makes no progress on any of its tasks. Any purely interrupt-driven system using fixed interrupt priorities will suffer from receive livelock under input overload conditions.

Once the input rate exceeds the reciprocal of the CPU cost of processing one input event, any task scheduled at a lower priority will not get a chance to run. Yet we do not want to lightly discard the obvious benefits of an interrupt-driven design. In this paper, we present a number of simple modifications to the purely interrupt-driven model, and show that they guarantee throughput and improve latency under overload, while preserving the desirable qualities of an interrupt-driven system under light load.
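In symbols (notation ours): if handling one input event at interrupt level costs c seconds of CPU time, then lower-priority work receives no CPU once the event arrival rate lambda satisfies

    \[ \lambda \;\ge\; \frac{1}{c}. \]

For example, at an assumed cost of c = 100 microseconds per packet, an input stream of only 10,000 packets per second is enough to exclude all other work.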

The rest of this paper concentrates on host-based routing, since this simplifies the context of the problem and allows easy performance measurement.

Requirements for scheduling network tasks

Performance problems generally arise when a system is subjected to transient or long-term input overload. Ideally, the communication subsystem could handle the worst-case input load without saturating, but cost considerations often prevent us from building such powerful systems.

Systems are usually sized to support a specified design-center load, and under overload the best we can ask for is controlled and graceful degradation. When an end-system is involved in processing considerable network traffic, its performance depends critically on how its tasks are scheduled. The mechanisms and policies that schedule packet processing and other tasks should guarantee acceptable system throughput, reasonable latency and jitter (variance in delay), fair allocation of resources, and overall system stability, without imposing excessive overheads, especially when the system is overloaded.

We can define throughput as the rate at which the system delivers packets to their ultimate consumers. A consumer could be an application running on the receiving host, or the host could be acting as a router and forwarding packets to consumers on other hosts. We expect the throughput of a well-designed system to keep up with the offered load up to a point called the Maximum Loss-Free Receive Rate (MLFRR), and at higher loads throughput should not drop below this rate.

Of course, useful throughput depends not just on successful reception of packets; the system must also transmit packets. Because packet reception and packet transmission often compete for the same resources, under input overload conditions the scheduling subsystem must ensure that packet transmission continues at an adequate rate. Many applications, such as distributed systems and interactive multimedia, often depend more on low-latency, low-jitter communications than on high throughput.

Even during overload, we want to avoid long queues, which increase latency, and bursty scheduling, which increases jitter. When a host is overloaded with incoming network packets, it must also continue to process other tasks, so as to keep the system responsive to management and control requests, and to allow applications to make use of the arriving packets. The scheduling subsystem must fairly allocate CPU resources among packet reception, packet transmission, protocol processing, and other tasks.
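One simple shape such an allocation can take is quota-based round-robin service among the network tasks, sketched below in C. The task set, the quota value, and all helper names are our own assumptions for illustration, not the paper's code:

    /* Fair, quota-limited round-robin among network tasks: every task
     * gets its quota before any task gets a second turn, so packet
     * reception cannot starve transmission or protocol processing. */
    #include <stdbool.h>

    enum { NTASKS = 3, QUOTA = 5 };

    /* Each task does at most 'quota' units of work and reports
     * whether it still has work pending. */
    typedef bool (*task_fn)(int quota);

    bool rx_work(int quota);      /* packet reception    (assumed) */
    bool tx_work(int quota);      /* packet transmission (assumed) */
    bool proto_work(int quota);   /* protocol processing (assumed) */

    static task_fn tasks[NTASKS] = { rx_work, tx_work, proto_work };

    /* One scheduler pass; returns true if another round is needed. */
    bool run_round(void)
    {
        bool pending = false;
        for (int i = 0; i < NTASKS; i++)
            pending |= tasks[i](QUOTA);
        return pending;
    }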

Motivating applications

We were led to our investigations by a number of specific applications that can suffer from livelock. Such applications could be built on dedicated single-purpose systems, but are often built using a general-purpose system such as UNIX. The applications include:

- Host-based routing: Although inter-network routing is traditionally done using special-purpose (usually non-interrupt-driven) router systems, routing is often done using more conventional hosts.

These applications (and others like them, such as Web servers) are all potentially exposed to heavy, non-flow-controlled loads. A host that behaves badly when overloaded can also harm other systems on the network. Livelock in a router, for example, may cause the loss of control messages, or delay their processing. This can lead other routers to incorrectly infer link failure, causing incorrect routing information to propagate over the entire wide-area network.

Worse, loss or delay of control messages can lead to network instability, by causing positive feedback in the generation of control traffic [10].

Interrupt-driven scheduling and its consequences

Scheduling policies and mechanisms significantly affect the throughput and latency of a system under overload.

In an interrupt-driven operating system, the interrupt subsystem must be viewed as a component of the scheduling system, since it has a major role in determining what code runs when. We have observed that interrupt-driven systems have trouble meeting the requirements discussed in section 3.

In this section, we first describe the characteristics of an interrupt-driven system, and then identify three kinds of problems caused by network input overload in interrupt-driven systems:

- Receive livelock under overload: delivered throughput drops to zero while the input overload persists.

- Increased latency for packet delivery or forwarding: the system delays the delivery of one packet while it processes the interrupts for subsequent packets, possibly of a burst.

- Starvation of packet transmission: even if the CPU keeps up with the input load, strict priority assignments may prevent it from transmitting any packets.

Description of an interrupt-driven system

An interrupt-driven system performs badly under network input overload because of the way in which it prioritizes the tasks executed as the result of network input.

We use the 4.2BSD model as our example. When a packet arrives, the network interface signals this event by interrupting the CPU. The interrupt causes entry into the associated network device driver, which does some initial processing of the packet, places it on an input queue, and requests a software interrupt for the remaining protocol processing. The software interrupt is taken at a lower IPL (interrupt priority level), and so this protocol processing can be preempted by subsequent device interrupts. We avoid lengthy periods at high IPL, to reduce latency for handling certain other events. The queues between steps executed at different IPLs provide some insulation against packet losses due to transient overloads, but typically they have fixed length limits.

When a packet should be queued but the queue is full, the system must drop the packet. The selection of proper queue limits, and thus the allocation of buffering among layers in the system, is critical to good performance, but beyond the scope of this paper.
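A minimal sketch of such a fixed-limit queue with drop-on-full behavior, in the spirit of BSD's ifqueue macros; the names and the comment's example limit are our assumptions, not code from the paper:

    /* Fixed-limit packet queue with drop-on-full semantics, in the
     * spirit of BSD's struct ifqueue. Names are illustrative. */
    #include <stddef.h>

    struct packet {
        struct packet *next;
        /* payload omitted */
    };

    struct pktqueue {
        struct packet *head, *tail;
        int len;              /* current queue length */
        int maxlen;           /* fixed limit, e.g. 50; choosing it well
                                 is beyond the scope of the paper */
        unsigned long drops;  /* packets dropped because the queue was full */
    };

    /* Enqueue a packet; returns 0 on success, -1 if the queue was
     * full and the packet had to be dropped. */
    int pktq_enqueue(struct pktqueue *q, struct packet *p)
    {
        if (q->len >= q->maxlen) {
            q->drops++;       /* queue full: the system must drop the packet */
            return -1;
        }
        p->next = NULL;
        if (q->tail != NULL)
            q->tail->next = p;
        else
            q->head = p;
        q->tail = p;
        q->len++;
        return 0;
    }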

As a consequence of this structure, a heavy load of incoming packets could generate a high rate of interrupts at device IPL. Dispatching an interrupt is a costly operation, so to avoid this overhead, the network device driver attempts to batch interrupts.

That is, if packets arrive in a burst, the interrupt handler attempts to process as many packets as possible before returning from the interrupt. This amortizes the cost of processing an interrupt over several packets. Even with batching, a system overloaded with input packets will spend most of its time in the code that runs at device IPL.
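The batching loop looks roughly like the following sketch. The device helpers (device_rx_ready, device_rx_next, device_ack_interrupt) and the input-queue function are hypothetical names assumed for illustration; this is our rendering of the idea, not the paper's code:

    /* Batched receive interrupt handler: drain every packet the
     * interface has buffered before returning, so one interrupt
     * dispatch is amortized over a whole burst of packets. */
    struct device;
    struct packet;

    int device_rx_ready(struct device *dev);           /* packets buffered?   */
    struct packet *device_rx_next(struct device *dev); /* take next packet    */
    void device_ack_interrupt(struct device *dev);
    int ip_input_enqueue(struct packet *p);            /* hand off to the
                                                          lower-IPL input queue */

    void rx_interrupt_handler(struct device *dev)
    {
        /* A burst of N packets costs one dispatch instead of N. */
        while (device_rx_ready(dev)) {
            struct packet *p = device_rx_next(dev);
            if (ip_input_enqueue(p) < 0) {
                /* input queue full: the packet is dropped, as above */
            }
        }
        device_ack_interrupt(dev);
    }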

That is, the design gives absolute priority to processing incoming packets. At the time that 4.2BSD was designed, network interfaces could buffer only a few packets, so unless receive interrupts were serviced promptly, subsequent packets would be lost; this is still a problem with low-cost interfaces. Thus, systems derived from 4.2BSD retain this structure. Modern network adapters, however, can receive many back-to-back packets without host intervention, either through the use of copious buffering or highly autonomous DMA engines.

This insulates the system from the network, and eliminates much of the rationale for giving absolute priority to the first few steps of processing a received packet.

Receive livelock

In an interrupt-driven system, receiver interrupts take priority over all other activity. If packets arrive too fast, the system will spend all of its time processing receiver interrupts. It will therefore have no resources left to support delivery of the arriving packets to applications or, in the case of a router, to forwarding and transmitting these packets.

The useful throughput of the system will drop to zero. Following [11], we refer to this condition as receive livelock: a state of the system where no useful progress is being made, because some necessary resource is entirely consumed with processing receiver interrupts.

When the input load drops sufficiently, the system leaves this state, and is again able to make forward progress. This distinguishes livelock from deadlock, from which the system would not recover even if the input rate dropped to zero. A system could behave in one of three ways as the input load increases. In an ideal system, the delivered throughput always matches the offered load. In a well-behaved real system, throughput keeps up with the offered load up to the MLFRR and then stays constant: at loads above the MLFRR, the system is still making progress, but it is dropping some of the offered input; typically, packets are dropped at a queue between processing steps that occur at different priorities.

In a system prone to receive livelock, however, throughput decreases with increasing offered load, for input rates above the MLFRR. Receive livelock occurs at the point where the throughput falls to zero.
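The three behaviors can be written as throughput functions of offered load (notation ours, not the paper's):

    \[ T_{\text{ideal}}(\lambda) = \lambda, \qquad
       T_{\text{well-behaved}}(\lambda) = \min(\lambda,\ \text{MLFRR}), \]
    \[ T_{\text{livelock-prone}}(\lambda) < \text{MLFRR} \ \text{for} \
       \lambda > \text{MLFRR}, \ \text{falling to } 0 \ \text{at the livelock point.} \]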

A livelocked system wastes all of the effort it puts into partially processing received packets, since they are all discarded. Receiver-interrupt batching complicates the situation slightly: batching can shift the livelock point but cannot, by itself, prevent livelock. Section 6 presents measurements of this effect.



Eliminating Receive Livelock in an Interrupt-Driven Kernel: summary

Livelock is a condition in which processes keep executing but make no progress: for example, two processes repeatedly collide while acquiring resources, back off, retry, and collide again, so neither ever obtains what it needs even though resources are available. In an operating system, an interrupt-driven design performs well at low load, but under high load interrupt handling itself can induce livelock. This paper proposes a set of scheduling improvements that eliminate the livelock and starvation problems in the UNIX networking subsystem. The main idea is to use polling: since interrupts are efficient at low load and polling is efficient at high load, the system should switch between the two mechanisms dynamically.
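A minimal sketch of that dynamic switch, in the style the paper describes: take one interrupt per burst, disable receive interrupts, service the device from a scheduler-run polling routine under a quota, and re-enable interrupts only once the backlog is cleared. The helper names and the quota value are our assumptions for illustration, not the authors' code:

    /* Hybrid interrupt/polling receive path. Disabling receive
     * interrupts bounds time spent at interrupt level; the quota
     * keeps a persistent backlog from starving transmission and
     * application processing. */
    #include <stdbool.h>

    struct device;
    bool device_rx_ready(struct device *dev);
    void device_rx_disable_intr(struct device *dev);
    void device_rx_enable_intr(struct device *dev);
    void process_one_packet(struct device *dev);  /* full processing to
                                                     completion, no half-done work */
    void schedule_poll(struct device *dev);       /* arrange for the scheduler
                                                     to call rx_poll() */

    enum { QUOTA = 5 };  /* max packets per poll; an assumed tuning value */

    /* Interrupt handler: do no packet work here, just switch modes. */
    void rx_interrupt(struct device *dev)
    {
        device_rx_disable_intr(dev);  /* stop the interrupt stream */
        schedule_poll(dev);
    }

    /* Polling routine, run from the normal task scheduler so it
     * competes fairly with transmission, protocols, and applications. */
    void rx_poll(struct device *dev)
    {
        int done = 0;

        while (done < QUOTA && device_rx_ready(dev)) {
            process_one_packet(dev);
            done++;
        }

        if (device_rx_ready(dev))
            schedule_poll(dev);           /* backlog remains: keep polling */
        else
            device_rx_enable_intr(dev);   /* drained: return to interrupts */
    }

Because rx_poll runs as an ordinary scheduled task rather than at interrupt priority, input overload can no longer monopolize the CPU; under light load the interrupt path still provides low latency.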
