Message-ID: <49ECD5E4.60100@cosmosbay.com>
Date:	Mon, 20 Apr 2009 22:07:00 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Christoph Lameter <cl@...ux.com>
CC:	David Miller <davem@...emloft.net>,
	Michael Chan <mchan@...adcom.com>,
	Ben Hutchings <bhutchings@...arflare.com>,
	netdev@...r.kernel.org
Subject: Re: Network latency regressions from 2.6.22 to 2.6.29 (results with
 IRQ affinity)

Christoph Lameter wrote:
> On Mon, 20 Apr 2009, Eric Dumazet wrote:
> 
>> Point is that even with tcpdump running, latencies are very good on 2.6.30-rc2, and were very good
>> with 2.6.22. I see no significant increase/decrease...
> 
> Well, okay, that applies to your testing methodology, but the statement
> that you have shown the regression I reported does not exist is not
> proven, since you ran a different test.

I ran half of the test. The receiver side is OK, and its latency is what we
all expect from it as the service provider.

Now you can focus on the sender side.

For example, your program uses a kernel service to gather timestamps with nanosecond precision.

Maybe there is a problem with it, I don't know...
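
If you want to rule the timing call out, you could first measure its own
cost. Here is a rough sketch (an illustration only; I am guessing the service
in question is clock_gettime(CLOCK_MONOTONIC)):

#include <stdio.h>
#include <time.h>

#define ITERS 1000000L

int main(void)
{
	struct timespec start, end, tmp;
	long i;
	long long ns;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERS; i++)
		clock_gettime(CLOCK_MONOTONIC, &tmp);	/* the call under test */
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL
	     + (end.tv_nsec - start.tv_nsec);
	printf("average clock_gettime() cost: %lld ns\n", ns / ITERS);
	return 0;
}

(Link with -lrt on older glibc.) If each call costs a significant fraction of
a microsecond, for instance because there is no vDSO fast path, the
measurement itself distorts the latency numbers.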

Your test has so many variables that it is hard to guess which part has a problem.

Maybe this is what you wanted to show after all, and you are not really
interested in discovering what is actually happening. Oh well, just kidding.

I am not trying to say you are right or wrong, Christoph, just trying to
check whether Linux really got a regression somewhere in these past releases.
So far, I have not found any strange results on the UDP path, once IRQ
affinities are fixed of course.

> 
>> 1 us is the time it takes to access about 10 false-shared cache lines.
> 
> That depends on the size of the system and the number of processors
> contending for the cache line.
> 
>> 64-bit arches store fewer pointers/longs per cache line.
>> So a 64-bit kernel could be slower on this kind of workload in the general case (if several CPUs play the game).
> 
> Right. But in practice I have also seen slight performance increases due
> to the increased availability of memory and the avoidance of various
> 32-bit hacks (like highmem). Plus several recent subsystems seem to be
> optimized for 64 bit, e.g. Infiniband.

Maybe, but with udpping on 40-byte messages, I am not sure it can make a difference.
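
To put rough numbers on the cache-line point above (a sketch, assuming the
usual 64-byte line on x86):

#include <stdio.h>

#define CACHE_LINE_SIZE 64	/* assumption: typical x86 cache line */

int main(void)
{
	printf("sizeof(void *) = %zu -> %zu pointers per %d-byte line\n",
	       sizeof(void *), CACHE_LINE_SIZE / sizeof(void *),
	       CACHE_LINE_SIZE);
	return 0;
}

That gives 8 pointers per line on 64-bit versus 16 on 32-bit, so the same
pointer-heavy structure touches twice as many lines. Combined with the
~100 ns per contended line implied by the "1 us for 10 lines" figure above,
that is where a 64-bit kernel could lose, if several CPUs contend.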

> 
> I'd still like to see udpping results on your testing rig to get another
> datapoint. If the udpping results are not showing regressions on your
> tests then there is likely a config issue at the core of the regression
> that I am seeing here.

No changes in udpping results, just noise.
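
For reference, the kind of round trip we are measuring looks roughly like
this (a sketch only, not the actual udpping tool; the peer address is made
up and an echo responder on the standard echo port 7 is assumed):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	char buf[40] = { 0 };	/* 40-byte payload, as in the tests above */
	struct sockaddr_in peer;
	struct timespec t0, t1;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	memset(&peer, 0, sizeof(peer));
	peer.sin_family = AF_INET;
	peer.sin_port = htons(7);	/* assumption: UDP echo service */
	inet_pton(AF_INET, "192.168.0.2", &peer.sin_addr);	/* made-up peer */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	sendto(fd, buf, sizeof(buf), 0, (struct sockaddr *)&peer, sizeof(peer));
	recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);	/* wait for the echo */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("rtt = %lld ns\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000000LL
	       + (t1.tv_nsec - t0.tv_nsec));
	close(fd);
	return 0;
}

A real run would of course loop and average over many round trips.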

Also, my machines use bonding and VLANs, so I probably have a little bit of overhead
(bonding uses an rwlock, not very SMP friendly...)


