Date:	Thu, 31 Jan 2008 16:12:19 +0100
From:	Carsten Aulbert <carsten.aulbert@....mpg.de>
To:	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>
CC:	Bruce Allen <ballen@...vity.phys.uwm.edu>, netdev@...r.kernel.org,
	Henning Fehrmann <henning.fehrmann@....mpg.de>,
	Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed

Hi all, I'm slowly crawling through the mails.

Brandeburg, Jesse wrote:

>>>> The test was done with various mtu sizes ranging from 1500 to 9000,
>>>> with ethernet flow control switched on and off, and using reno and
>>>> cubic as a TCP congestion control.
>>> As asked in LKML thread, please post the exact netperf command used
>>> to start the client/server, whether or not you're using irqbalanced
>>> (aka irqbalance) and what cat /proc/interrupts looks like (you ARE
>>> using MSI, right?)
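
(For reference, the knobs from the quoted test matrix are typically set like
this; the interface name eth1 and the exact values here are assumptions:

ip link set eth1 mtu 9000
ethtool -A eth1 rx on tx on
sysctl -w net.ipv4.tcp_congestion_control=cubic

i.e. MTU, ethernet flow control, and the TCP congestion control algorithm,
respectively.)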

We are using MSI; /proc/interrupts looks like this:
n0003:~# cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
   0:    6536963          0          0          0   IO-APIC-edge      timer
   1:          2          0          0          0   IO-APIC-edge      i8042
   3:          1          0          0          0   IO-APIC-edge      serial
   8:          0          0          0          0   IO-APIC-edge      rtc
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
  14:      32321          0          0          0   IO-APIC-edge      libata
  15:          0          0          0          0   IO-APIC-edge      libata
  16:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb5
  18:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb4
  19:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb3
  23:          0          0          0          0   IO-APIC-fasteoi   ehci_hcd:usb1, uhci_hcd:usb2
378:   17234866          0          0          0   PCI-MSI-edge      eth1
379:     129826          0          0          0   PCI-MSI-edge      eth0
NMI:          0          0          0          0
LOC:    6537181    6537326    6537149    6537052
ERR:          0


What we don't understand is why only core 0 gets the interrupts, since
the affinity mask is set to f:
# cat /proc/irq/378/smp_affinity
f

Right now, irqbalance is not running, though I can give it a shot if
people think this will make a difference.
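
For reference, the IRQ could also be pinned by hand by writing a hex CPU
mask to that file, e.g. to restrict IRQ 378 to CPU1 (the mask value here is
just an example):

n0003:~# echo 2 > /proc/irq/378/smp_affinity

With the mask at f all four cores are allowed, but without irqbalance (or
manual pinning) delivery often stays on the first allowed CPU, which would
match the counts above.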

> I would suggest you try TCP_RR with a command line something like this:
> netperf -t TCP_RR -H <hostname> -C -c -- -b 4 -r 64K

I did that and the results can be found here:
https://n0.aei.uni-hannover.de/wiki/index.php/NetworkTest

The results with netperf running like
netperf -t TCP_STREAM -H <host> -l 20
can be found here:
https://n0.aei.uni-hannover.de/wiki/index.php/NetworkTestNetperf1

I reran the tests with
netperf -t <test> -H <host> -l 20 -c -C
or, in the case of TCP_RR, with the suggested burst settings -b 4 -r 64K
appended (full command below).
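
That is, the full TCP_RR command line was:

netperf -t TCP_RR -H <host> -l 20 -c -C -- -b 4 -r 64K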


> Yes, InterruptThrottleRate=8000 means there will be no more than 8000
> ints/second from that adapter, and if interrupts are generated faster
> than that they are "aggregated."
> 
> Interestingly since you are interested in ultra low latency, and may be
> willing to give up some cpu for it during bulk transfers you should try
> InterruptThrottleRate=1 (can generate up to 70000 ints/s)
> 

On the web page you'll see that there are about 4000 interrupts/s for 
most tests and up to 20,000/s for the TCP_RR test. Shall I change the 
throttle rate?
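
If so: assuming e1000 is built as a module, the rate can be changed at load
time with e.g.

modprobe -r e1000
modprobe e1000 InterruptThrottleRate=1

(or persistently with an "options e1000 InterruptThrottleRate=1" line in
modprobe's configuration). The parameter name is taken from the quoted mail.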

>>> just for completeness can you post the dump of ethtool -e eth0 and
>>> lspci -vvv?
>> Yup, we'll give that info also.

n0002:~# ethtool -e eth1
Offset          Values
------          ------
0x0000          00 30 48 93 94 2d 20 0d 46 f7 57 00 ff ff ff ff
0x0010          ff ff ff ff 6b 02 9a 10 d9 15 9a 10 86 80 df 80
0x0020          00 00 00 20 54 7e 00 00 00 10 da 00 04 00 00 27
0x0030          c9 6c 50 31 32 07 0b 04 84 29 00 00 00 c0 06 07
0x0040          08 10 00 00 04 0f ff 7f 01 4d ff ff ff ff ff ff
0x0050          14 00 1d 00 14 00 1d 00 af aa 1e 00 00 00 1d 00
0x0060          00 01 00 40 1e 12 ff ff ff ff ff ff ff ff ff ff
0x0070          ff ff ff ff ff ff ff ff ff ff ff ff ff ff cf 2f

lspci -vvv for this card:
0e:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller
         Subsystem: Super Micro Computer Inc Unknown device 109a
         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B-
         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
         Latency: 0, Cache Line Size: 64 bytes
         Interrupt: pin A routed to IRQ 378
         Region 0: Memory at ee200000 (32-bit, non-prefetchable) [size=128K]
         Region 2: I/O ports at 5000 [size=32]
         Capabilities: [c8] Power Management version 2
                 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                 Status: D0 PME-Enable- DSel=0 DScale=1 PME-
         Capabilities: [d0] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+
                 Address: 00000000fee0f00c  Data: 41b9
         Capabilities: [e0] Express Endpoint IRQ 0
                 Device: Supported: MaxPayload 256 bytes, PhantFunc 0, ExtTag-
                 Device: Latency L0s <512ns, L1 <64us
                 Device: AtnBtn- AtnInd- PwrInd-
                 Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported-
                 Device: RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                 Device: MaxPayload 128 bytes, MaxReadReq 512 bytes
                 Link: Supported Speed 2.5Gb/s, Width x1, ASPM unknown, Port 0
                 Link: Latency L0s <128ns, L1 <64us
                 Link: ASPM Disabled RCB 64 bytes CommClk- ExtSynch-
                 Link: Speed 2.5Gb/s, Width x1
         Capabilities: [100] Advanced Error Reporting
         Capabilities: [140] Device Serial Number 2d-94-93-ff-ff-48-30-00

(the complete lspci -vvv output is attached)

Thanks a lot; we're open for suggestions.

Carsten

