Date:	Wed, 17 Sep 2014 16:55:26 +0200
From:	Jesper Dangaard Brouer <jbrouer@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Tom Herbert <therbert@...gle.com>
Subject: Re: CPU scheduler to TXQ binding? (ixgbe vs. igb)

On Wed, 17 Sep 2014 07:32:39 -0700
Eric Dumazet <eric.dumazet@...il.com> wrote:

> On Wed, 2014-09-17 at 15:26 +0200, Jesper Dangaard Brouer wrote:
> > The CPU-to-TXQ binding behavior of the ixgbe and igb NIC drivers is
> > somewhat different.  Normally I set up the NIC IRQ-to-CPU bindings
> > 1-to-1, with the set_irq_affinity script [1].
> > 
> > To force use of a specific HW TXQ, I normally force the CPU binding
> > of the process, either with "taskset" or with "netperf -T lcpu,rcpu".
> > 
> > This works fine with the ixgbe driver, but not with igb.  That is,
> > with igb, a program forced onto a specific CPU can still use another
> > TXQ.  What am I missing?
> > 
> > 
> > I'm monitoring this with both:
> >  1) watch -d sudo tc -s -d q ls dev ethXX
> >  2) https://github.com/ffainelli/bqlmon
> > 
> > [1] https://github.com/netoptimizer/network-testing/blob/master/bin/set_irq_affinity
> 
> Have you setup XPS ?
> 
> echo 0001 >/sys/class/net/ethX/queues/tx-0/xps_cpus
> echo 0002 >/sys/class/net/ethX/queues/tx-1/xps_cpus
> echo 0004 >/sys/class/net/ethX/queues/tx-2/xps_cpus
> echo 0008 >/sys/class/net/ethX/queues/tx-3/xps_cpus
> echo 0010 >/sys/class/net/ethX/queues/tx-4/xps_cpus
> echo 0020 >/sys/class/net/ethX/queues/tx-5/xps_cpus
> echo 0040 >/sys/class/net/ethX/queues/tx-6/xps_cpus
> echo 0080 >/sys/class/net/ethX/queues/tx-7/xps_cpus
> 
> Or something like that, depending on the number of CPUs and TX queues.

Thanks, that worked!  The masks were all left at the default "000" for
igb, but were set correctly (as above) for ixgbe (strange).
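
A quick way to check the current masks is simply reading the sysfs
files back (same paths as above):

 $ grep . -H /sys/class/net/eth1/queues/tx-*/xps_cpus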

Did:

$ export DEV=eth1 ; export NR_CPUS=11 ; \
 for txq in `seq 0 $NR_CPUS` ; do \
   file=/sys/class/net/${DEV}/queues/tx-${txq}/xps_cpus ; \
   mask=`printf %X $((1<<$txq))` ; \
   test -e $file && sudo sh -c "echo $mask > $file" && \
   grep . -H $file ; \
 done
/sys/class/net/eth1/queues/tx-0/xps_cpus:001
/sys/class/net/eth1/queues/tx-1/xps_cpus:002
/sys/class/net/eth1/queues/tx-2/xps_cpus:004
/sys/class/net/eth1/queues/tx-3/xps_cpus:008
/sys/class/net/eth1/queues/tx-4/xps_cpus:010
/sys/class/net/eth1/queues/tx-5/xps_cpus:020
/sys/class/net/eth1/queues/tx-6/xps_cpus:040
/sys/class/net/eth1/queues/tx-7/xps_cpus:080
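
To double check the binding, a rough verification (the target host
and the CPU number below are just examples) is to pin a netperf flow
to one CPU and watch that only the matching TXQ carries the traffic:

 $ taskset -c 2 netperf -H 198.18.0.2 -t TCP_STREAM -l 30 &
 $ watch -d "tc -s -d q ls dev eth1"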
 
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
