Message-ID: <1349716215.21172.3484.camel@edumazet-glaptop>
Date: Mon, 08 Oct 2012 19:10:15 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Rick Jones <rick.jones2@...com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>, Jesse Gross <jesse@...ira.com>,
Tom Herbert <therbert@...gle.com>,
Yuchung Cheng <ycheng@...gle.com>
Subject: Re: [PATCH] net: gro: selective flush of packets
On Mon, 2012-10-08 at 09:42 -0700, Rick Jones wrote:
> > By the way, one of the beauties of GRO is that it helps under load to
> > aggregate packets and reduce cpu load. People wanting very low
> > latencies should probably not use GRO, and whether they use it or not,
> > receiving a full 64-packet batch on a particular NIC makes latencies
> > very unpredictable.
> >
> > So if we consumed all budget in a napi->poll() handler, it's because we
> > are under load and we don't really want to cancel GRO aggregation.
>
> Is that actually absolute, or does it depend on GRO aggregation actually
> aggregating? In your opening message you talked about how, with enough
> flows, GRO is defeated but its overhead remains.
>
Sorry, I don't understand the question.

We consume the whole budget when 64 packets are fetched from the NIC.
This has nothing to do with GRO; it is NAPI behavior.

Sure, if these packets are UDP messages and cross the GRO stack for
nothing, it's pure overhead.
The current situation is:

You receive a burst of packets, with one (or a few) TCP message(s), while
the other frames are UDP only. This TCP message is held in the GRO queue
and stays there as long as we don't receive another packet for the same
flow, or until the burst ends.
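For reference, a minimal sketch (not taken from any particular driver) of
the usual NAPI poll pattern, assuming a hypothetical foo_rx_one() helper
that pulls one frame off the RX ring. It shows why consuming the whole
budget means we stay in polling mode, and that packets parked in the GRO
list are only flushed when napi_complete() runs at the end of the burst:

static int foo_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	while (work_done < budget) {
		/* hypothetical helper fetching one frame from the NIC */
		struct sk_buff *skb = foo_rx_one(napi);

		if (!skb)
			break;

		/* may aggregate, or hold the skb in napi->gro_list */
		napi_gro_receive(napi, skb);
		work_done++;
	}

	/* Only when the ring is drained before the budget is exhausted
	 * do we complete: napi_complete() flushes the pending GRO
	 * packets and leaves polling mode (the driver then re-enables
	 * RX interrupts). If we used the full budget, we stay in
	 * polling mode and the GRO list keeps its packets.
	 */
	if (work_done < budget)
		napi_complete(napi);

	return work_done;
}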
Note that I don't really care about these few TCP messages right now,
but when/if we use a hash table and allow XXX packets in the GRO stack,
things are different ;)