Message-ID: <4A6A2125329CFD4D8CC40C9E8ABCAB9F249D5F33D3@MILEXCH2.ds.jdsu.net>
Date: Wed, 22 Dec 2010 08:52:55 -0800
From: Jon Zhou <Jon.Zhou@...u.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: "juice@...gman.org" <juice@...gman.org>,
Stephen Hemminger <shemminger@...tta.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: Using ethernet device as efficient small packet generator
-----Original Message-----
From: Eric Dumazet [mailto:eric.dumazet@...il.com]
Sent: Wednesday, December 22, 2010 11:59 PM
To: Jon Zhou
Cc: juice@...gman.org; Stephen Hemminger; netdev@...r.kernel.org
Subject: RE: Using ethernet device as efficient small packet generator
On Wednesday, 22 December 2010 at 07:48 -0800, Jon Zhou wrote:
>
> Hi Eric, is there any special setting needed in pktgen.conf?
>
> PGDEV=/proc/net/pktgen/kpktgend_0
> echo "Removing all devices"
> pgset "rem_device_all"
> echo "Adding eth1-fp-0" //or eth1?
eth1
> pgset "add_device eth1"
> echo "Setting max_before_softirq 10000"
> pgset "max_before_softirq 10000"
Not sure you need to tweak max_before_softirq (I never did)
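For reference, the snippets in this thread rely on the usual pgset shell helper
from Documentation/networking/pktgen.txt, which writes one command to $PGDEV and
reports the Result: line on failure. A minimal sketch of that helper:

# pgset: write a pktgen command to $PGDEV, show the Result: line on error
# (sketch following the helper in Documentation/networking/pktgen.txt)
pgset() {
    local result
    echo "$1" > "$PGDEV"
    result=$(grep "Result: OK:" "$PGDEV")
    if [ -z "$result" ]; then
        grep "Result:" "$PGDEV"
    fi
}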
>
> So all I need to do is set CPU affinity and start 8 pktgen threads? (PGDEV=/proc/net/pktgen/kpktgend_0~7, each with "eth1")
Yes, but you must also use the queue_map_min and queue_map_max pktgen
parameters so that each CPU manipulates its own 'queue':
CPU 0 :
pgset "queue_map_min 0"
pgset "queue_map_max 0"
...
CPU 3 :
pgset "queue_map_min 3"
pgset "queue_map_max 3"
PGDEV=/proc/net/pktgen/kpktgend_0
echo "Removing all devices"
pgset "rem_device_all"
echo "Adding eth4"
pgset "add_device eth4"
echo "Setting max_before_softirq 10000"
pgset "queue_map_min 0"
pgset "queue_map_max 0"
--> It failed with:
queue_map_min 0
./pktgen.conf-8-1: line 10: echo: write error: Invalid argument
queue_map_max 0
./pktgen.conf-8-1: line 10: echo: write error: Invalid argument
PGDEV=/proc/net/pktgen/eth4
echo "Configuring $PGDEV"
pgset "$COUNT"
pgset "$CLONE_SKB"
pgset "$PKT_SIZE"
pgset "$DELAY"
pgset "dst 10.10.11.2"
pgset "queue_map_min 0"
pgset "queue_map_max 7"
pgset "dst_mac 00:04:23:08:91:dc"
-> this works fine.
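The $COUNT, $CLONE_SKB, $PKT_SIZE and $DELAY variables above follow the stock
pktgen.conf-* sample scripts; typical values look roughly like this (these are
assumptions for illustration, not the values actually used in this run):

COUNT="count 10000000"          # 0 = run forever
CLONE_SKB="clone_skb 1000000"   # re-use each skb many times to cut alloc cost
PKT_SIZE="pkt_size 60"          # 60 bytes + 4-byte CRC = minimum Ethernet frame
DELAY="delay 0"                 # no inter-packet gap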
Here is the top output. Why is only kpktgend_0 running?
Eric, can you share your pktgen script? Thank you.
top - 00:43:59 up 7:00, 6 users, load average: 0.95, 0.66, 0.51
Tasks: 8 total, 1 running, 7 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 3.2%sy, 0.0%ni, 86.1%id, 0.1%wa, 0.0%hi, 10.6%si, 0.0%st
Mem: 32228M total, 933M used, 31295M free, 97M buffers
Swap: 2055M total, 0M used, 2055M free, 138M cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8806 root 20 0 0 0 0 R 100 0.0 5:27.12 kpktgend_0
8807 root 20 0 0 0 0 S 0 0.0 0:00.00 kpktgend_1
8808 root 20 0 0 0 0 S 0 0.0 0:00.00 kpktgend_2
8810 root 20 0 0 0 0 S 0 0.0 0:00.00 kpktgend_3
8811 root 20 0 0 0 0 S 0 0.0 0:00.00 kpktgend_4
8812 root 20 0 0 0 0 S 0 0.0 0:00.00 kpktgend_5
8813 root 20 0 0 0 0 S 0 0.0 0:00.00 kpktgend_6
8814 root 20 0 0 0 0 S 0 0.0 0:00.00 kpktgend_7
I have already set the IRQ affinity:
cat /proc/interrupts | grep eth4
78: 10625257 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth4-TxRx-0
79: 10451 581007 0 0 0 0 0 0 IR-PCI-MSI-edge eth4-TxRx-1
80: 10447 0 535185 0 0 0 0 0 IR-PCI-MSI-edge eth4-TxRx-2
81: 10441 0 0 575911 0 0 0 0 IR-PCI-MSI-edge eth4-TxRx-3
82: 10444 0 0 0 521068 0 0 0 IR-PCI-MSI-edge eth4-TxRx-4
83: 10448 0 0 0 0 564710 0 0 IR-PCI-MSI-edge eth4-TxRx-5
84: 10429 0 0 0 0 0 516087 0 IR-PCI-MSI-edge eth4-TxRx-6
85: 10444 0 0 0 0 0 0 558530 IR-PCI-MSI-edge eth4-TxRx-7
86: 2 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth4:lsc
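For reference, spreading the eth4-TxRx interrupts one per CPU as in the listing
above is usually done by writing hex CPU masks to /proc/irq/<n>/smp_affinity
(with irqbalance stopped). A minimal sketch, using the IRQ numbers 78-85 from
the listing:

# Pin eth4-TxRx-0..7 (IRQs 78..85 above) to CPUs 0..7, one queue per CPU
irq=78
for cpu in $(seq 0 7); do
    # smp_affinity takes a hexadecimal CPU bitmask: 01, 02, 04, 08, ...
    printf "%x" $((1 << cpu)) > /proc/irq/$irq/smp_affinity
    irq=$((irq + 1))
done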