Date:	Wed, 14 May 2014 09:28:50 -0700
From:	Alexander Duyck <alexander.h.duyck@...el.com>
To:	Jesper Dangaard Brouer <brouer@...hat.com>, netdev@...r.kernel.org
CC:	Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
	Daniel Borkmann <dborkman@...hat.com>,
	Florian Westphal <fw@...len.de>,
	"David S. Miller" <davem@...emloft.net>,
	Stephen Hemminger <shemminger@...tta.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Robert Olsson <robert@...julf.se>,
	Ben Greear <greearb@...delatech.com>,
	John Fastabend <john.r.fastabend@...el.com>, danieltt@....se,
	zhouzhouyi@...il.com
Subject: Re: [net-next PATCH 2/5] ixgbe: increase default TX ring buffer to
 1024

On 05/14/2014 07:17 AM, Jesper Dangaard Brouer wrote:
> Using pktgen I'm seeing the ixgbe driver "push-back", due to the TX
> ring running full.  Thus, the TX ring is artificially limiting pktgen.
> 
> Diagnose via "ethtool -S", look for "tx_restart_queue" or "tx_busy"
> counters.
> 
> Increasing the TX ring buffer should be done carefully, as it comes at
> a higher memory cost, which can also negatively influence performance.
> E.g. the ring buffer array of struct ixgbe_tx_buffer (currently 48
> bytes per entry) grows from 512*48=24576 bytes to 1024*48=49152 bytes,
> which is larger than the L1 data cache (32KB on my E5-2630), thus
> increasing L1->L2 cache references.
> 
> Adjusting the TX ring buffer (TXSZ) measured over 10 sec with ifpps
>  (single CPU performance, ixgbe 10Gbit/s, E5-2630)
>  * cmd: ethtool -G eth8 tx $TXSZ
>  * 3,930,065 pps -- TXSZ= 512
>  * 5,312,249 pps -- TXSZ= 768
>  * 5,362,722 pps -- TXSZ=1024
>  * 5,361,390 pps -- TXSZ=1536
>  * 5,362,439 pps -- TXSZ=2048
>  * 5,359,744 pps -- TXSZ=4096
> 
> Choosing size 1024 because 768 is not enough for the next
> optimizations.
> 
> Notice after commit 6f25cd47d (pktgen: fix xmit test for BQL enabled
> devices) pktgen uses netif_xmit_frozen_or_drv_stopped() and ignores
> the BQL "stack" pause (QUEUE_STATE_STACK_XOFF) flag.  This allows us
> to put more pressure on the TX ring buffers.
> 
> It is the ixgbe_maybe_stop_tx() call that stops the transmits, and
> pktgen respects this via its call to netif_xmit_frozen_or_drv_stopped(txq).
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> ---
> 
>  drivers/net/ethernet/intel/ixgbe/ixgbe.h |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> index c688c8a..bf078fe 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> @@ -63,7 +63,7 @@
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>  
>  /* TX/RX descriptor defines */
> -#define IXGBE_DEFAULT_TXD		    512
> +#define IXGBE_DEFAULT_TXD		   1024
>  #define IXGBE_DEFAULT_TX_WORK		    256
>  #define IXGBE_MAX_TXD			   4096
>  #define IXGBE_MIN_TXD			     64
> 

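(For reference, the ring push-back described above shows up in the
per-queue statistics, e.g. with something like the line below; eth8 is
the interface from the quoted numbers, and the counter names can differ
between drivers:

	ethtool -S eth8 | egrep 'tx_restart_queue|tx_busy'
)
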
What is the point of optimizing ixgbe for a synthetic benchmark?  In my
experience the full stack can only handle about 2Mpps with 60-byte
packets on a single queue.  Updating the defaults for a pktgen test
seems unrealistic, as that isn't really a standard use case for the
driver.

I'd say that it might be better to just add a note to the documentation
folder indicating what configuration is optimal for pktgen, rather than
changing everyone's defaults to support one specific test.
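
Such a note could be as short as a couple of ethtool commands, e.g.
(just a sketch; eth8 and the 1024 ring size are taken from your numbers
above, and the right value will depend on the NIC and workload):

	# For high-rate pktgen runs, enlarge the TX ring beyond the default:
	ethtool -G eth8 tx 1024
	# Verify the ring sizes actually in use:
	ethtool -g eth8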

Thanks,

Alex
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
