Message-ID: <20140514210935.5fc80c79@redhat.com>
Date: Wed, 14 May 2014 21:09:35 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: David Miller <davem@...emloft.net>
Cc: alexander.h.duyck@...el.com, netdev@...r.kernel.org,
jeffrey.t.kirsher@...el.com, dborkman@...hat.com, fw@...len.de,
shemminger@...tta.com, paulmck@...ux.vnet.ibm.com,
robert@...julf.se, greearb@...delatech.com,
john.r.fastabend@...el.com, danieltt@....se, zhouzhouyi@...il.com,
brouer@...hat.com
Subject: Re: [net-next PATCH 2/5] ixgbe: increase default TX ring buffer to
1024
On Wed, 14 May 2014 13:49:50 -0400 (EDT)
David Miller <davem@...emloft.net> wrote:
> From: Alexander Duyck <alexander.h.duyck@...el.com>
> Date: Wed, 14 May 2014 09:28:50 -0700
>
> > I'd say that it might be better to just add a note to the documentation
> > folder indicating what configuration is optimal for pktgen rather then
> > changing everyone's defaults to support one specific test.
>
> We could have drivers provide a pktgen config adjustment mechanism,
> so if someone starts pktgen then the device auto-adjusts to a pktgen
> optimal configuration (whatever that may entail).
That might be problematic because changing the TX queue size causes the
ixgbe driver to reset the link.
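For reference, that is the resize a user would normally trigger via
ethtool's ring parameters, e.g. (device name just an example):

  ethtool -G eth0 tx 1024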
Notice that pktgen is ignoring BQL. I'm sort of hoping that BQL will
push back for real use-cases, to avoid the bad effects of increasing
the TX ring size.
One of the bad effects I'm hoping BQL will mitigate is the case of
filling the TX queue with large frames. Consider 9K jumbo frames: how
long will it take to empty 1024 jumbo frames on a 10G link?
 (9000*8)/(10000*10^6)*1000*1024 = 7.37ms
But with 9K MTU and a 512 ring, we already have:
 (9000*8)/(10000*10^6)*1000*512 = 3.69ms
I guess the more normal use-case would be 1500+38 bytes (Ethernet overhead):
 (1538*8)/(10000*10^6)*1000*1024 = 1.26ms
And these calculations should, in theory, be multiplied by the number
of TX queues.
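For anyone wanting to redo the math for other frame or ring sizes,
here is a minimal stand-alone C sketch of the same calculation (the
10G line-rate constant and the names are just illustrative):

/* Stand-alone sketch, not kernel code: time to drain a TX ring of
 * ring_size frames of frame_bytes bytes each on a link_bps bits/sec
 * link, optionally scaled by the number of TX queues. */
#include <stdio.h>

static double drain_ms(unsigned frame_bytes, double link_bps,
                       unsigned ring_size, unsigned num_queues)
{
        double per_frame_sec = (frame_bytes * 8.0) / link_bps;
        return per_frame_sec * 1000.0 * ring_size * num_queues;
}

int main(void)
{
        double line_rate = 10000e6;     /* 10G = 10000 * 10^6 bits/sec */

        printf("9000B x 1024: %.2f ms\n", drain_ms(9000, line_rate, 1024, 1));
        printf("9000B x  512: %.2f ms\n", drain_ms(9000, line_rate,  512, 1));
        printf("1538B x 1024: %.2f ms\n", drain_ms(1538, line_rate, 1024, 1));
        return 0;
}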
I know increasing these limits should not be taken lightly, but we
just have to be crystal clear that the current 512 limit is
artificially limiting the capabilities of your hardware.
We can postpone this increase, because I also observe a 2Mpps limit
when actually allocating/freeing real SKBs. The alloc/free cost is
currently just hiding this limitation.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer