Message-ID: <20121006051407.GA27390@gondor.apana.org.au>
Date: Sat, 6 Oct 2012 13:14:07 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>, Jesse Gross <jesse@...ira.com>
Subject: Re: [RFC] GRO scalability

On Sat, Oct 06, 2012 at 07:08:46AM +0200, Eric Dumazet wrote:
> On Saturday, 6 October 2012 at 12:11 +0800, Herbert Xu wrote:
> > On Fri, Oct 05, 2012 at 04:52:27PM +0200, Eric Dumazet wrote:
> > > The current GRO cell is somewhat limited:
> > >
> > > - It uses a single list (napi->gro_list) of pending skbs
> > >
> > > - This list has a limit of 8 skbs (MAX_GRO_SKBS)
> > >
> > > - Workloads with lots of concurrent flows have a small GRO hit rate but
> > > pay high overhead (in inet_gro_receive())
> > >
> > > - Increasing MAX_GRO_SKBS is not an option, because GRO
> > > overhead becomes too high.
> >
> > Yeah these were all meant to be addressed at some point.
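Side note for anyone skimming the thread: the per-packet cost described above
comes from the linear walk over napi->gro_list in dev_gro_receive().  The
sketch below is my rough, from-memory rendering of that shape, not the actual
mainline code — sketch_gro_receive() is a made-up name, and MAX_GRO_SKBS is
redefined here only so the snippet stands alone.

#include <linux/netdevice.h>
#include <linux/etherdevice.h>

#define MAX_GRO_SKBS 8  /* redefined here; mirrors the limit in net/core/dev.c */

/*
 * Simplified sketch: every incoming skb is compared against every skb
 * already held on the single napi->gro_list, so the cost is O(held flows)
 * per packet, and the list itself is capped at MAX_GRO_SKBS entries.
 */
static enum gro_result sketch_gro_receive(struct napi_struct *napi,
                                          struct sk_buff *skb)
{
        struct sk_buff *p;

        /* pass 1: flag which held skbs might belong to the same flow */
        for (p = napi->gro_list; p; p = p->next) {
                unsigned long diffs;

                diffs  = (unsigned long)p->dev ^ (unsigned long)skb->dev;
                diffs |= compare_ether_header(skb_mac_header(p),
                                              skb_gro_mac_header(skb));
                NAPI_GRO_CB(p)->same_flow = !diffs;
        }

        /*
         * pass 2 (elided): the protocol gro_receive callbacks, e.g.
         * inet_gro_receive(), walk the same list again looking for a match.
         */

        if (napi->gro_count >= MAX_GRO_SKBS)
                return GRO_NORMAL;      /* list full: deliver the skb unmerged */

        /* no match: hold the skb at the head of the single list */
        skb->next = napi->gro_list;
        napi->gro_list = skb;
        napi->gro_count++;
        return GRO_HELD;
}
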
> >
> > > - Packets can stay held in the GRO cell for a long time (there is
> > > no flush if napi never completes on a stressed CPU)
> >
> > This should never happen though. NAPI runs must always be
> > punctuated just to guarantee one card never hogs a CPU. Which
> > driver causes this behaviour?
>
> I believe it's a generic issue, not specific to a driver.
>
> napi_gro_flush() is only called from napi_complete()
>
> Some drivers (marvell/skge.c & realtek/8139cp.c) call it only because
> they 'inline' napi_complete()

So which driver has the potential to never call napi_gro_flush()?
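
For reference, my reading of the flush path, as a rough sketch rather than
code lifted from any driver — the foo_*/bar_* names and bar_lock are
stand-ins for the driver-specific bits:

#include <linux/netdevice.h>
#include <linux/spinlock.h>

/* Stand-ins for the driver-specific pieces; not real functions. */
static int foo_rx(struct napi_struct *napi, int budget);
static void foo_enable_irq(struct napi_struct *napi);
static int bar_rx(struct napi_struct *napi, int budget);
static void bar_enable_irq(struct napi_struct *napi);
static DEFINE_SPINLOCK(bar_lock);

/* Pattern 1: a driver that uses napi_complete() gets the flush for free,
 * since napi_complete() calls napi_gro_flush() before __napi_complete(). */
static int foo_poll(struct napi_struct *napi, int budget)
{
        int work_done = foo_rx(napi, budget);

        if (work_done < budget) {
                napi_complete(napi);    /* flushes gro_list, then completes */
                foo_enable_irq(napi);
        }
        return work_done;
}

/* Pattern 2: skge/8139cp style, where the completion is open-coded under
 * the driver's own lock, so napi_gro_flush() has to be called explicitly. */
static int bar_poll(struct napi_struct *napi, int budget)
{
        int work_done = bar_rx(napi, budget);

        if (work_done < budget) {
                unsigned long flags;

                napi_gro_flush(napi);
                spin_lock_irqsave(&bar_lock, flags);
                __napi_complete(napi);
                bar_enable_irq(napi);
                spin_unlock_irqrestore(&bar_lock, flags);
        }
        return work_done;
}

Either way, on a CPU that stays busy enough that work_done == budget on every
round, the completion path never runs, napi_gro_flush() is never reached, and
held skbs just sit on gro_list — which I take to be Eric's point above.
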
Cheers,
--
Email: Herbert Xu <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt