Message-ID: <479529DF.5030707@nortel.com>
Date: Mon, 21 Jan 2008 17:25:19 -0600
From: "Chris Friesen" <cfriesen@...tel.com>
To: Eric Dumazet <dada1@...mosbay.com>
CC: netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: questions on NAPI processing latency and dropped network packets
Eric Dumazet wrote:
> Chris Friesen wrote:
>
>> I've done some further digging, and it appears that one of the
>> problems we may be facing is very high instantaneous traffic rates.
>>
>> Instrumentation showed up to 222K packets/sec for short periods (at
>> least 1.1 ms, possibly longer), although the long-term average is down
>> around 14-16K packets/sec.
>
>
> Instrumentation done where exactly ?
I added some code to e1000_clean_rx_irq() to track rx_fifo drops and
total packets received, along with an accurate timestamp. Whenever the
rx_fifo error count changed, it dumped that information.
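Roughly, the driver-side instrumentation looks like this (a simplified
sketch rather than the actual patch; the helper name is hypothetical,
and the net_stats field name follows the 2008-era e1000 driver, so it
may differ in other trees):

/* Called from e1000_clean_rx_irq() once per polling pass, after the
 * adapter statistics have been updated.  Needs <linux/ktime.h>. */
static unsigned long prev_rx_fifo;
static unsigned long total_rx_packets;

static void e1000_log_fifo_drops(struct e1000_adapter *adapter,
                                 unsigned int pkts_this_pass)
{
        unsigned long rx_fifo = adapter->net_stats.rx_fifo_errors;

        total_rx_packets += pkts_this_pass;

        /* Only dump when the rx_fifo drop counter has actually moved. */
        if (rx_fifo != prev_rx_fifo) {
                printk(KERN_DEBUG
                       "e1000: %llu usec, rx_fifo=%lu, total_pkts=%lu\n",
                       (unsigned long long)ktime_to_us(ktime_get()),
                       rx_fifo, total_rx_packets);
                prev_rx_fifo = rx_fifo;
        }
}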
>> Is there anything else we can do to minimize the latency of network
>> packet processing and avoid having to crank the rx ring size up so high?
> You have some tasks that disable softirqs too long. Sometimes, bumping
> RX ring size is OK (but you will still have delays), sometimes it is not
> an option, since 4096 is the limit on current hardware.
I added some instrumentation to take timestamps in __do_softirq() as
well. Based on these timestamps, I can see the following code sequence:
2374604616 usec, start processing softirqs in __do_softirq()
2374610337 usec, log values in e1000_clean_rx_irq()
2374611411 usec, log values in e1000_clean_rx_irq()
Between those two successive calls to e1000_clean_rx_irq(), the rx_fifo
count went up, i.e. packets were being dropped while softirq processing
was already underway.
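For reference, the __do_softirq() side of the instrumentation is just a
timestamp dump at the top of the function, along these lines (again a
sketch, not the exact patch; printk from softirq context is crude but
was adequate here):

/* At the top of __do_softirq() in kernel/softirq.c (sketch only;
 * needs <linux/ktime.h>). */
printk(KERN_DEBUG "softirq: %llu usec, start processing softirqs\n",
       (unsigned long long)ktime_to_us(ktime_get()));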
Does anyone have any patchsets to track down which softirqs are taking
a long time, and/or who's disabling softirqs?
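In the meantime, the crude approach I've been considering is to stamp
the time in local_bh_disable() and complain from local_bh_enable() if
bottom halves stayed off too long. A sketch of the helpers (hypothetical
names; nesting is handled by only stamping/checking at the outermost
level, and preemption details are omitted):

static DEFINE_PER_CPU(u64, bh_off_stamp);

/* Call from local_bh_disable() when softirq_count() was previously zero. */
static inline void bh_off_start(void)
{
        __get_cpu_var(bh_off_stamp) = ktime_to_us(ktime_get());
}

/* Call from local_bh_enable() just before the count drops back to zero. */
static inline void bh_off_end(void)
{
        u64 delta = ktime_to_us(ktime_get()) - __get_cpu_var(bh_off_stamp);

        if (delta > 1000)       /* flag anything over 1 ms */
                printk(KERN_WARNING
                       "BHs were disabled for %llu usec, caller %p\n",
                       (unsigned long long)delta,
                       __builtin_return_address(0));
}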
Chris