Message-ID: <1349500126.4883.4.camel@edumazet-laptop>
Date: Sat, 06 Oct 2012 07:08:46 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>, Jesse Gross <jesse@...ira.com>
Subject: Re: [RFC] GRO scalability
On Saturday, 06 October 2012 at 12:11 +0800, Herbert Xu wrote:
> On Fri, Oct 05, 2012 at 04:52:27PM +0200, Eric Dumazet wrote:
> > The current GRO cell is somewhat limited:
> >
> > - It uses a single list (napi->gro_list) of pending skbs
> >
> > - This list has a limit of 8 skbs (MAX_GRO_SKBS)
> >
> > - Workloads with lots of concurrent flows have a small GRO hit rate but
> > pay a high overhead (in inet_gro_receive())
> >
> > - Increasing MAX_GRO_SKBS is not an option, because GRO
> > overhead becomes too high.
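
(To make that cost concrete, here is a toy userspace sketch of the current
scheme, with made-up names, not the real dev_gro_receive()/inet_gro_receive()
code: every packet walks the one short list, and once MAX_GRO_SKBS entries are
held a miss evicts the oldest, so with many concurrent flows each packet pays
the full walk and almost never merges.)

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_GRO_SKBS 8

struct held_pkt {
	struct held_pkt *next;
	uint32_t flow_hash;	/* stands in for the real protocol compare */
};

/* Returns 1 on a merge hit, 0 when the packet is simply held. */
static int gro_receive_sketch(struct held_pkt **gro_list, int *count,
			      struct held_pkt *pkt)
{
	struct held_pkt *p;

	/* Every packet pays a walk over the whole list ... */
	for (p = *gro_list; p; p = p->next) {
		if (p->flow_hash == pkt->flow_hash) {
			free(pkt);	/* hit: real code would merge pkt into p */
			return 1;
		}
	}

	/* ... and on a miss with a full list the oldest held packet is
	 * evicted to make room (the real code delivers it up the stack). */
	if (*count >= MAX_GRO_SKBS) {
		struct held_pkt **pp = gro_list;

		while ((*pp)->next)
			pp = &(*pp)->next;
		free(*pp);
		*pp = NULL;
		(*count)--;
	}

	pkt->next = *gro_list;	/* hold the new packet */
	*gro_list = pkt;
	(*count)++;
	return 0;
}

int main(void)
{
	struct held_pkt *gro_list = NULL;
	int count = 0, hits = 0, i;

	srand(1);
	/* 10000 packets spread over ~1000 flows: the 8 entry list almost
	 * never holds the right flow, so nearly every packet is a miss. */
	for (i = 0; i < 10000; i++) {
		struct held_pkt *pkt = calloc(1, sizeof(*pkt));

		pkt->flow_hash = rand() % 1000;
		hits += gro_receive_sketch(&gro_list, &count, pkt);
	}
	printf("hits: %d / 10000\n", hits);
	return 0;
}

(With thousands of flows the odds of matching one of 8 held entries are close
to zero, yet every packet still pays the walk plus an eviction.)
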
>
> Yeah these were all meant to be addressed at some point.
>
> > - Packets can stay held in the GRO cell for a long time (there is
> > no flush if napi never completes on a stressed cpu)
>
> This should never happen though. NAPI runs must always be
> punctuated just to guarantee one card never hogs a CPU. Which
> driver causes this behaviour?
I believe it's a generic issue, not specific to one driver.

napi_gro_flush() is only called from napi_complete().

Some drivers (marvell/skge.c & realtek/8139cp.c) call it directly only
because they open-code ('inline') napi_complete().
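
Here is a self-contained toy model of that (names are made up, it is not the
real NAPI/GRO API): the only flush point is the 'complete' path, so when the
poll loop keeps exhausting its budget on a loaded cpu, whatever got parked in
the GRO cell just keeps waiting.

#include <stdio.h>

/* Packets currently parked in the GRO cell and how many poll rounds
 * they have been waiting. */
struct toy_napi {
	int held;
	int polls_waited;
};

static void toy_gro_flush(struct toy_napi *n)
{
	printf("delivering %d held packets after %d polls\n",
	       n->held, n->polls_waited);
	n->held = 0;
	n->polls_waited = 0;
}

static void toy_napi_complete(struct toy_napi *n)
{
	/* As things stand, this is the only place the GRO cell is flushed. */
	toy_gro_flush(n);
}

/* One poll round: 'pending' packets wait on the ring, and a handful of
 * partly merged packets end up parked in the GRO cell. */
static int toy_poll(struct toy_napi *n, int budget, int pending)
{
	int work = pending < budget ? pending : budget;

	if (!n->held)
		n->held = 3;		/* some flows are now parked */
	n->polls_waited++;

	if (work < budget)
		toy_napi_complete(n);	/* flush happens only here */
	return work;
}

int main(void)
{
	struct toy_napi n = { 0, 0 };
	int round;

	/* Stressed cpu: the ring always has more than 'budget' packets,
	 * so toy_poll() never completes and the parked packets keep
	 * waiting, round after round. */
	for (round = 0; round < 5; round++) {
		toy_poll(&n, 64, 1000);
		printf("round %d: %d packets still held (%d polls)\n",
		       round, n.held, n.polls_waited);
	}
	return 0;
}

With the real code the effect is the same: nothing bounds how long a held
packet waits once the cpu is saturated.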