Message-ID: <1473811749.18970.259.camel@edumazet-glaptop3.roam.corp.google.com>
Date:   Tue, 13 Sep 2016 17:09:09 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Michael Ma <make0818@...il.com>
Cc:     netdev <netdev@...r.kernel.org>
Subject: Re: Modification to skb->queue_mapping affecting performance

On Tue, 2016-09-13 at 16:30 -0700, Michael Ma wrote:

> The RX queue number I found from "ls /sys/class/net/eth0/queues" is
> 64. (Is this the correct way of identifying the queue number on the NIC?)
> I set up ifb with 24 queues, which equals the TX queue count of
> eth0 and also the number of CPU cores.

Please do not drop netdev@ from this mail exchange.

ethtool -l eth0
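
If you want the script below to pick the queue count up automatically, a
minimal sketch (assuming the driver reports a "Combined:" row in the
"ethtool -l" output, as most multiqueue NICs do; falls back to 1 if not):

TXQ=$(ethtool -l eth0 | awk '/^Combined:/ {c=$2} END {print (c ? c : 1)}')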

> 
> > There is no qdisc lock contention anymore AFAIK, since each CPU will use
> > a dedicated IFB queue and tasklet.
> >
> How is this achieved? I thought the qdisc on ifb would still be
> protected by the qdisc root lock in __dev_xmit_skb(), so essentially all
> threads processing the qdisc are still serialized without using MQ?

You have to properly set up ifb/mq, as in:

# netem based setup, installed at receiver side only
ETH=eth0
IFB=ifb10
#DELAY="delay 100ms"
EST="est 1sec 4sec"
#REORDER=1000us
#LOSS="loss 2.0"
TXQ=24  # change this to number of TX queues on the physical NIC

# ifb queue count must be given at link creation time
ip link add $IFB numtxqueues $TXQ type ifb
ip link set dev $IFB up

tc qdisc del dev $ETH ingress 2>/dev/null
tc qdisc add dev $ETH ingress 2>/dev/null

# redirect all ingress IPv4 traffic from $ETH to $IFB
tc filter add dev $ETH parent ffff: \
   protocol ip u32 match u32 0 0 flowid 1:1 \
	action mirred egress redirect dev $IFB

tc qdisc del dev $IFB root 2>/dev/null

tc qdisc add dev $IFB root handle 1: mq
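# mq exposes one class per TX queue (class minors are hex: 1:1 .. 1:18
# for 24 queues); attaching a separate netem under each class gives every
# CPU/queue its own qdisc and lock instead of one shared root lock.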
for i in $(seq 1 $TXQ)
do
 slot=$( printf %x $(( i )) )
 tc qd add dev $IFB parent 1:$slot $EST netem \
	limit 100000 $DELAY $REORDER $LOSS
done
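
To confirm that each queue got its own netem instance (and to watch the
per-queue counters), check:

tc -s qdisc show dev $IFB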

