Message-ID: <4E946DBF.5050105@genband.com>
Date: Tue, 11 Oct 2011 10:24:31 -0600
From: Chris Friesen <chris.friesen@...band.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: starlight@...nacle.cx, linux-kernel@...r.kernel.org,
netdev <netdev@...r.kernel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Christoph Lameter <cl@...two.org>, Willy Tarreau <w@....eu>,
Ingo Molnar <mingo@...e.hu>,
Stephen Hemminger <stephen.hemminger@...tta.com>,
Benjamin LaHaise <bcrl@...ck.org>,
Joe Perches <joe@...ches.com>,
Chetan Loke <Chetan.Loke@...scout.com>,
Con Kolivas <conman@...ivas.org>,
Serge Belyshev <belyshev@...ni.sinp.msu.ru>
Subject: Re: big picture UDP/IP performance question re 2.6.18 -> 2.6.32
On 10/06/2011 11:40 PM, Eric Dumazet wrote:
> On Thursday, 06 October 2011 at 23:27 -0400, starlight@...nacle.cx wrote:
>> If the older kernels are switching into NAPI
>> for much of the surge and then switching out
>> once the pulse falls off, it might
>> conceivably result in much better latency
>> and overall performance.
> That's exactly the opposite: your old kernel is not fast enough to
> enter/exit NAPI on every incoming frame.
>
> Instead of one IRQ per incoming frame, you have fewer interrupts:
> a NAPI run processes more than one frame.
>
> Now increase your incoming rate, and you'll discover a newer kernel
> is able to process more frames without losses.
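(In driver terms, the batching Eric describes lives in the NAPI ->poll
callback: one run consumes up to "budget" frames per interrupt, and the
driver only re-arms the RX IRQ once the ring is drained. Rough sketch
below; the mydrv_* helpers are invented, not from any real driver.)

static int mydrv_poll(struct napi_struct *napi, int budget)
{
        struct mydrv_priv *priv = container_of(napi, struct mydrv_priv, napi);
        int work_done = 0;

        /* one NAPI run drains up to "budget" frames per IRQ */
        while (work_done < budget && mydrv_rx_pending(priv))
                work_done += mydrv_rx_one(priv);        /* invented helper */

        if (work_done < budget) {
                /* ring empty: leave polling mode and re-arm the RX IRQ */
                napi_complete(napi);
                mydrv_enable_rx_irq(priv);              /* invented helper */
        }
        return work_done;
}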
I wonder if it would make sense to adjust the interrupt mitigation
parameters in the NIC to allow it to accumulate a few packets before
interrupting the CPU. We had good luck using this to reduce the
interrupt rate in a quasi-pathological case where we were bouncing in
and out of NAPI because we were *just* fast enough to keep up with the
incoming packets.
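Something like the sketch below sets that up from userspace via the
ETHTOOL_SCOALESCE ioctl (equivalent to "ethtool -C eth0 rx-usecs 100
rx-frames 8"). The interface name and the 100us/8-frame values are
made-up examples, and not every driver honours both knobs:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
        struct ethtool_coalesce ecoal;
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) {
                perror("socket");
                return 1;
        }
        memset(&ifr, 0, sizeof(ifr));
        memset(&ecoal, 0, sizeof(ecoal));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* example NIC name */
        ifr.ifr_data = (void *)&ecoal;

        ecoal.cmd = ETHTOOL_GCOALESCE;                /* read current settings */
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                perror("ETHTOOL_GCOALESCE");
                return 1;
        }

        /* hold off the RX IRQ until 100us pass or 8 frames accumulate */
        ecoal.rx_coalesce_usecs = 100;
        ecoal.rx_max_coalesced_frames = 8;

        ecoal.cmd = ETHTOOL_SCOALESCE;                /* write them back */
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                perror("ETHTOOL_SCOALESCE");
                return 1;
        }
        close(fd);
        return 0;
}

The trade-off is obviously a bit of added per-packet latency in
exchange for fewer interrupts.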
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@...band.com
www.genband.com