Message-ID: <063D6719AE5E284EB5DD2968C1650D6D17246334@AcuExch.aculab.com>
Date: Thu, 15 May 2014 09:16:57 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Jesper Dangaard Brouer' <brouer@...hat.com>,
David Miller <davem@...emloft.net>
CC: "alexander.h.duyck@...el.com" <alexander.h.duyck@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"jeffrey.t.kirsher@...el.com" <jeffrey.t.kirsher@...el.com>,
"dborkman@...hat.com" <dborkman@...hat.com>,
"fw@...len.de" <fw@...len.de>,
"shemminger@...tta.com" <shemminger@...tta.com>,
"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
"robert@...julf.se" <robert@...julf.se>,
"greearb@...delatech.com" <greearb@...delatech.com>,
"john.r.fastabend@...el.com" <john.r.fastabend@...el.com>,
"danieltt@....se" <danieltt@....se>,
"zhouzhouyi@...il.com" <zhouzhouyi@...il.com>
Subject: RE: [net-next PATCH 2/5] ixgbe: increase default TX ring buffer to
1024
From: Jesper Dangaard Brouer
> On Wed, 14 May 2014 13:49:50 -0400 (EDT)
> David Miller <davem@...emloft.net> wrote:
>
> > From: Alexander Duyck <alexander.h.duyck@...el.com>
> > Date: Wed, 14 May 2014 09:28:50 -0700
> >
> > > I'd say that it might be better to just add a note to the documentation
> > > folder indicating what configuration is optimal for pktgen, rather than
> > > changing everyone's defaults to support one specific test.
> >
> > We could have drivers provide a pktgen config adjustment mechanism,
> > so if someone starts pktgen then the device auto-adjusts to a pktgen
> > optimal configuration (whatever that may entail).
>
> That might be problematic because changing the TX queue size causes the
> ixgbe driver to reset the link.
>
> Notice that pktgen is ignoring BQL. I'm sort of hoping that BQL will
> push back for real use-cases, to avoid the bad effects of increasing
> the TX size.
>
> One of the bad effects I'm hoping BQL will mitigate is the case of
> filling the TX queue with large frames. Consider 9K jumbo frames: how
> long will it take to empty 1024 of them on a 10G link:
>
> (9000*8)/(10000*10^6)*1000*1024 = 7.37ms
Never mind 9K 'jumbo' frames; I'm pretty sure ixgbe supports TCP segmentation
offload - so you can have 64K frames in the tx ring.
Since each takes (about) 44 ethernet frames, that is about 55ms
to clear the queue.
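
For reference, a small standalone C sketch (not from the original mails) that
reproduces both back-of-the-envelope figures. The 10G line rate, the
1024-entry ring and the ~44 wire frames per 64K TSO send are the assumptions
used in the discussion above; preamble, inter-frame gap and header overheads
are ignored.

	/* drain-time estimate for a full TX ring on a 10G link */
	#include <stdio.h>

	int main(void)
	{
		const double link_bps = 10000e6;	/* 10 Gbit/s line rate */
		const int ring_entries = 1024;		/* TX ring size */

		/* ring full of 9000-byte jumbo frames */
		double jumbo_ms = (9000.0 * 8 / link_bps) * 1000 * ring_entries;

		/* ring full of 64K TSO sends, ~44 wire frames of 1500 bytes each */
		double tso_ms = (44 * 1500.0 * 8 / link_bps) * 1000 * ring_entries;

		printf("9K jumbo: %.2f ms to drain\n", jumbo_ms);	/* ~7.37 ms */
		printf("64K TSO:  %.2f ms to drain\n", tso_ms);		/* ~54 ms   */
		return 0;
	}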
David