Message-ID: <4B0A6452.2020508@bigtelecom.ru>
Date: Mon, 23 Nov 2009 13:30:42 +0300
From: Badalian Vyacheslav <slavon@...telecom.ru>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>,
Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: ixgbe question
Hello Eric. I have been playing with this card for 3 weeks, so maybe this helps you :)
By default the Intel flow steering uses only the first CPU. It's strange.
If we set an interrupt's affinity to a single CPU core, it will use that core.
If we set the affinity to two or more CPUs, the mask is applied but it doesn't work (the load does not spread).
See the ixgbe driver README from intel.com. It has an RSS parameter for flow steering; I think that controls this :)
Also, the driver from intel.com ships a script that splits the RxTx queues across CPU cores, but you must replace "tx rx" in its code with "TxRx" (a rough sketch of the same idea is below).
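It is only a minimal version (my own rough take, not the Intel script), assuming the queue interrupts show up in /proc/interrupts as "<iface>-TxRx-<n>", exactly as in your output below; run it as root:

#!/bin/sh
# Pin each <iface>-TxRx-<n> interrupt to its own CPU core
# (queue 0 -> cpu0, queue 1 -> cpu1, ...).
IFACE=${1:-fiber1}

grep "$IFACE-TxRx-" /proc/interrupts | while read line; do
    irq=`echo "$line" | cut -d: -f1 | tr -d ' '`
    queue=`echo "$line" | sed "s/.*$IFACE-TxRx-//"`
    mask=`printf %x $((1 << queue))`
    echo "IRQ $irq ($IFACE-TxRx-$queue) -> cpu mask $mask"
    echo $mask > /proc/irq/$irq/smp_affinity
done

Note that irqbalance, if it is running, may rewrite these masks, so it is usually stopped first.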
P.S. Please also check this if you can and want to:
On e1000 and an x86 kernel with 2x dual-core Xeons, my TC rules load in 3 minutes.
On ixgbe and an x86_64 kernel with 4x six-core Xeons, my TC rules take more than 15 minutes to load!
Is it a 64-bit regression? (A rough sketch of how I compare the load times is below.)
I can send you the TC rules if you ask me for them! Thanks!
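This is roughly how I compare the load times; tc.rules is just a placeholder name for a file holding one tc command per line (without the leading "tc" word), and -batch needs an iproute2 that supports it:

#!/bin/bash
# Rough comparison of TC rule load time.
RULES=${1:-tc.rules}

echo "one tc process per rule:"
time while read cmd; do
    tc $cmd
done < "$RULES"

# (reset the qdiscs here before the second run, so the rules
#  do not already exist)
echo "same rules through a single tc process (-batch):"
time tc -batch "$RULES"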
Slavon
> Hi Peter
>
> I tried a pktgen stress on an 82599EB card and could not split the RX load onto multiple cpus.
>
> Setup is :
>
> One 82599 card with fiber0 looped to fiber1, 10Gb link mode.
> The machine is an HP DL380 G6 with dual quad-core E5530 @ 2.4GHz (16 logical cpus).
>
> I use one pktgen thread sending on fiber0 to many dst IPs, and checked that fiber1
> was using many RX queues :
>
> grep fiber1 /proc/interrupts
> 117: 1301 13060 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-0
> 118: 601 1402 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-1
> 119: 634 832 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-2
> 120: 601 1303 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-3
> 121: 620 1246 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-4
> 122: 1287 13088 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-5
> 123: 606 1354 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-6
> 124: 653 827 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-7
> 125: 639 825 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-8
> 126: 596 1199 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-9
> 127: 2013 24800 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-10
> 128: 648 1353 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-11
> 129: 601 1123 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-12
> 130: 625 834 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-13
> 131: 665 1409 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-14
> 132: 2637 31699 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1-TxRx-15
> 133: 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge fiber1:lsc
>
>
>
> But only one CPU (CPU1) had a softirq running at 100%, and many frames were dropped :
>
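(A quick way to confirm which CPU takes all the NET_RX work, if your kernel exposes /proc/softirqs, is to watch whose column keeps growing:

    watch -d -n1 'grep -E "CPU|NET_RX" /proc/softirqs'

mpstat -P ALL 1 tells the same story in its %soft column.)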
> root@...odl380g6:/usr/src# ifconfig fiber0
> fiber0 Link encap:Ethernet HWaddr 00:1b:21:4a:fe:54
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:4 errors:0 dropped:0 overruns:0 frame:0
> TX packets:309291576 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:1368 (1.3 KB) TX bytes:18557495682 (18.5 GB)
>
> root@...odl380g6:/usr/src# ifconfig fiber1
> fiber1 Link encap:Ethernet HWaddr 00:1b:21:4a:fe:55
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:55122164 errors:0 dropped:254169411 overruns:0 frame:0
> TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:3307330968 (3.3 GB) TX bytes:1368 (1.3 KB)
>
>
> How and when can multiqueue RX really start to use several CPUs?
>
> Thanks
> Eric
>
>
> pktgen script :
>
> pgset()
> {
>     local result
>
>     echo $1 > $PGDEV
>
>     result=`cat $PGDEV | fgrep "Result: OK:"`
>     if [ "$result" = "" ]; then
>         cat $PGDEV | fgrep Result:
>     fi
> }
>
> pg()
> {
>     echo inject > $PGDEV
>     cat $PGDEV
> }
>
>
> PGDEV=/proc/net/pktgen/kpktgend_4
>
> echo "Adding fiber0"
> pgset "add_device fiber0@0"
>
>
> CLONE_SKB="clone_skb 15"
>
> PKT_SIZE="pkt_size 60"
>
>
> COUNT="count 100000000"
> DELAY="delay 0"
>
> PGDEV=/proc/net/pktgen/fiber0@0
> echo "Configuring $PGDEV"
> pgset "$COUNT"
> pgset "$CLONE_SKB"
> pgset "$PKT_SIZE"
> pgset "$DELAY"
> pgset "queue_map_min 0"
> pgset "queue_map_max 7"
> pgset "dst_min 192.168.0.2"
> pgset "dst_max 192.168.0.250"
> pgset "src_min 192.168.0.1"
> pgset "src_max 192.168.0.1"
> pgset "dst_mac 00:1b:21:4a:fe:55"
>
>
> # Time to run
> PGDEV=/proc/net/pktgen/pgctrl
>
> echo "Running... ctrl^C to stop"
> pgset "start"
> echo "Done"
>
> # Result can be viewed in /proc/net/pktgen/fiber0@0
>
> for f in fiber0@0
> do
> cat /proc/net/pktgen/$f
> done
>
>
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html