Message-ID: <4A182066.9030201@googlemail.com>
Date:	Sat, 23 May 2009 18:12:22 +0200
From:	Michael Riepe <michael.riepe@...glemail.com>
To:	David Dillow <dave@...dillows.org>
CC:	Michael Buesch <mb@...sch.de>,
	Francois Romieu <romieu@...zoreil.com>,
	Rui Santos <rsantos@...popie.com>,
	Michael Büker <m.bueker@...lin.de>,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 2.6.30-rc4] r8169: avoid losing MSI interrupts

Hi!

David Dillow wrote:

> I wonder if that is the TCP sawtooth pattern -- run up until we drop
> packets, drop off, repeat. I thought newer congestion algorithms would
> help with that, but I've not kept up, this may be another red-herring --
> like the bisection into genirq.

Actually, I just found out that things are much stranger. A freshly
booted system (I'm using 2.6.29.2 + the r8169 patch sent by Michael
Buesch, by the way) behaves like this:

[  3] local 192.168.178.206 port 44090 connected with 192.168.178.204 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    483 MBytes    405 Mbits/sec
[  3] 10.0-20.0 sec    472 MBytes    396 Mbits/sec
[  3] 20.0-30.0 sec    482 MBytes    404 Mbits/sec
[  3] 30.0-40.0 sec    483 MBytes    405 Mbits/sec
[  3] 40.0-50.0 sec    480 MBytes    402 Mbits/sec
[  3] 50.0-60.0 sec    479 MBytes    402 Mbits/sec
[  3]  0.0-60.0 sec  2.81 GBytes    402 Mbits/sec
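
(The exact iperf invocation isn't quoted in this mail; a plain client run
along the lines of

	iperf -c 192.168.178.204 -t 60 -i 10

would produce the 10-second interval reports and 60-second totals shown
here, so treat those flags as an assumption rather than a literal command
line.)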

Then I ran another test, something along the lines of

	# push a stream of zeros from this box through ssh to a remote dd that
	# discards it; two parallel (backgrounded) streams per host
	for dest in host1 host1 host2 host2
	do ssh $dest dd of=/dev/null bs=8k count=10240000 </dev/zero &
	done

After a while, I killed the ssh processes and ran iperf again. And this
time, I got:

[  3] local 192.168.178.206 port 58029 connected with 192.168.178.204 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    634 MBytes    531 Mbits/sec
[  3] 10.0-20.0 sec    740 MBytes    621 Mbits/sec
[  3] 20.0-30.0 sec    641 MBytes    538 Mbits/sec
[  3] 30.0-40.0 sec    738 MBytes    619 Mbits/sec
[  3] 40.0-50.0 sec    742 MBytes    622 Mbits/sec
[  3] 50.0-60.0 sec    743 MBytes    623 Mbits/sec
[  3]  0.0-60.0 sec  4.14 GBytes    592 Mbits/sec

Obviously, the high-load ssh test (which would kill the device within a
few seconds without the patch) triggers something here.

A few observations later, however, I was convinced that it's not a TCP
congestion or driver issue. Actually, the throughput depends on the CPU
the benchmark is running on. You can see that in gkrellm: whenever the
process jumps to another CPU, the throughput changes. On the four
(virtual) CPUs of the Atom 330, I get these results:

CPU 0:  0.0-60.0 sec  2.65 GBytes    380 Mbits/sec
CPU 1:  0.0-60.0 sec  4.12 GBytes    590 Mbits/sec
CPU 2:  0.0-60.0 sec  3.79 GBytes    543 Mbits/sec
CPU 3:  0.0-60.0 sec  4.13 GBytes    592 Mbits/sec

CPUs 0+2 are on the first core, 1+3 on the second.
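
(To pin the benchmark to a single CPU, taskset works here as well;
something like

	for cpu in 0 1 2 3
	do taskset -c $cpu iperf -c 192.168.178.204 -t 60
	done

gives one 60-second run per CPU. The iperf flags are the same assumption
as above, not a literal copy of the command line.)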

If I use two connections (iperf -P2) and nail iperf to both threads of a
single core with taskset (the program is multi-threaded, in case you were
wondering), I get this:

CPU 0+2:  0.0-60.0 sec  4.65 GBytes    665 Mbits/sec
CPU 1+3:  0.0-60.0 sec  6.43 GBytes    920 Mbits/sec

That's quite a difference, isn't it?
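
(For reference, the pinned two-stream runs look like

	taskset -c 0,2 iperf -c 192.168.178.204 -P 2 -t 60
	taskset -c 1,3 iperf -c 192.168.178.204 -P 2 -t 60

with the iperf options again being an assumption; only the taskset
pinning is taken from the test description above.)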

Now I wonder what CPU 0 is doing...
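
(One thing worth checking, assuming the interface is eth0, is which CPU
services the NIC's interrupts:

	grep eth0 /proc/interrupts
	cat /proc/irq/<irq number>/smp_affinity

If the MSI vector ends up on CPU 0, that would be one candidate
explanation for the lower numbers there.)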

-- 
Michael "Tired" Riepe <michael.riepe@...glemail.com>
X-Tired: Each morning I get up I die a little
