Message-ID: <1324065042.2621.27.camel@edumazet-laptop>
Date: Fri, 16 Dec 2011 20:50:42 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: David Decotigny <decot@...glers.com>
Cc: Matt Carlson <mcarlson@...adcom.com>,
Michael Chan <mchan@...adcom.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
Javier Martinez Canillas <martinez.javier@...il.com>,
Robin Getz <rgetz@...ckfin.uclinux.org>,
Matt Mackall <mpm@...enic.com>,
Tom Herbert <therbert@...gle.com>
Subject: Re: [PATCH net-next v1 5/6] tg3: implementation of a non-NAPI mode
On Friday, 16 December 2011 at 10:19 -0800, David Decotigny wrote:
> From: Tom Herbert <therbert@...gle.com>
>
> The tg3 NIC has a hard limit of 511 descriptors for the receive ring.
> Under a heavy load of small packets, the device's receive queue may
> not be serviced fast enough to prevent packet drops. This can happen
> for a variety of reasons, such as lengthy packet processing in the
> stack, softirqs being disabled for too long, etc. If the driver is
> run in non-NAPI mode, the RX queue is serviced in the device
> interrupt handler, which is much less likely to be deferred for a
> substantial period of time.
>
> Not using NAPI has some side effects that need to be considered. It
> increases the chance of live-lock in the interrupt handler, although
> since tg3 does interrupt coalescing this is very unlikely to occur.
> Also, more code runs with interrupts disabled, potentially deferring
> other hardware interrupts. The time spent in the interrupt handler
> should be minimized by dequeuing packets from the device queue and
> queuing them to a host queue as quickly as possible, as the sketch
> below illustrates.
>
> The default mode of operation remains NAPI, and its performance is
> unchanged (the code is unchanged). Non-NAPI mode is enabled by
> commenting out the CONFIG_TIGON3_NAPI Kconfig parameter.
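[Editor's note: presumably the build-time switch is wired with the
usual CONFIG_* preprocessor conditional; a sketch of that wiring,
where tg3_rx_nonnapi() is a hypothetical name and tnapi stands in for
the driver's per-vector NAPI context:]

	#ifdef CONFIG_TIGON3_NAPI
		/* Default: defer RX processing to the NAPI poll loop. */
		napi_schedule(&tnapi->napi);
	#else
		/* Hypothetical: drain the ring in hard-IRQ context. */
		tg3_rx_nonnapi(tp);
	#endif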
Oh well, that's ugly :(
I suspect this was only used with RPS/RFS?
Or with interrupts stuck on a given CPU?
Because with a default setup, and IRQs serviced by multiple CPUs, you
end up with possible packet reordering:
Packets 1,2,3,4 handled by CPU0: queued on the netif_rx() queue.
End of interrupt.
Packets 5,6,7,8 handled by CPU1: queued on the netif_rx() queue.
End of interrupt.
CPU0/CPU1 happily merge the packets out of order (see the sketch
below)...
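[Editor's note: a minimal userspace sketch (plain C, not kernel code)
of why independent per-CPU backlog queues can reorder a single flow:]

	#include <stdio.h>

	int main(void)
	{
		/* First interrupt ran on CPU0, the next on CPU1;
		 * netif_rx() put each batch on that CPU's own
		 * backlog queue. */
		int cpu0_backlog[] = { 1, 2, 3, 4 };
		int cpu1_backlog[] = { 5, 6, 7, 8 };

		/* Both softirqs drain concurrently; the interleaving
		 * below models that. */
		for (int i = 0; i < 4; i++)
			printf("stack sees packet %d, then packet %d\n",
			       cpu0_backlog[i], cpu1_backlog[i]);

		/* Delivery order: 1,5,2,6,3,7,4,8 -- packet 5 reaches
		 * TCP before packets 2..4, i.e. the flow is reordered. */
		return 0;
	}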