Message-ID: <Pine.LNX.4.64.0810260055070.22126@twin.jikos.cz>
Date:	Sun, 26 Oct 2008 01:10:19 +0200 (CEST)
From:	Jiri Kosina <jkosina@...e.cz>
To:	David Miller <davem@...emloft.net>
cc:	efault@....de, rjw@...k.pl, Ingo Molnar <mingo@...e.hu>,
	s0mbre@...rvice.net.ru, a.p.zijlstra@...llo.nl,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.

On Sat, 25 Oct 2008, David Miller wrote:

> But note that tbench performance improved a bit in 2.6.25.
> In my tests I noticed a similar effect, but from 2.6.23 to 2.6.24,
> weird.
> Just for the public record here are the numbers I got in my testing.

I am currently looking at a very similar-looking issue. For the public 
record, here are the numbers we have been able to come up with so far 
(measured with dbench, so the absolute values are slightly different, but 
they still show a similar pattern):

208.4 MB/sec  -- vanilla 2.6.16.60
201.6 MB/sec  -- vanilla 2.6.20.1
172.9 MB/sec  -- vanilla 2.6.22.19
 74.2 MB/sec  -- vanilla 2.6.23
 46.1 MB/sec  -- vanilla 2.6.24.2
 30.6 MB/sec  -- vanilla 2.6.26.1

I.e. a huge drop at 2.6.23: throughput falls from 172.9 MB/sec on 
2.6.22.19 to 74.2 MB/sec, i.e. to well under half (this was with the 
default config for each respective kernel).
2.6.23-rc1 shows 80.5 MB/s, a few percent better than the final 2.6.23, 
but still pretty bad.
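
For reference, each of these numbers was collected by hand. A loop like 
the following could drive the runs and pull out the throughput figure -- 
this is just a minimal sketch, assuming dbench's "-t" duration flag and 
its final "Throughput N MB/sec" summary line; it is not the harness we 
actually used:

#!/usr/bin/env python
# Sketch: run dbench a few times and keep the best throughput, to damp
# run-to-run noise. The "-t" duration flag and the "Throughput N MB/sec"
# summary line are assumptions about dbench's CLI; verify against your
# dbench version before relying on this.
import re
import subprocess

def dbench_throughput(clients=4, duration=60):
    out = subprocess.run(["dbench", "-t", str(duration), str(clients)],
                         capture_output=True, text=True).stdout
    m = re.search(r"Throughput\s+([\d.]+)\s+MB/sec", out)
    return float(m.group(1)) if m else None

if __name__ == "__main__":
    runs = [r for r in (dbench_throughput() for _ in range(3)) if r]
    if runs:
        print("best of %d runs: %.1f MB/sec" % (len(runs), max(runs)))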

I have gone through the commits that went into -rc1 and tried to figure 
out which one could be responsible (a sketch of how such a search could 
be scripted follows the numbers). Here are the numbers:

 85.3 MB/s for 2ba2d00363 (just before on-demand readahead has been merged)
 82.7 MB/s for 45426812d6 (before cond_resched() has been added into page
                           invalidation code)
187.7 MB/s for c1e4fe711a4 (just before the CFS scheduler has been merged)
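
These per-commit numbers were also collected by hand, since every step 
needs a kernel rebuild and reboot. In principle "git bisect run" could 
drive the search with a small predicate like the one below; the threshold 
is an assumption picked between the good ~173 MB/s and bad ~74 MB/s runs, 
and the build/boot plumbing is deliberately left out:

#!/usr/bin/env python
# Hypothetical predicate for "git bisect run": exit 0 for a good commit,
# 1 for a bad one, and 125 to ask bisect to skip an untestable commit.
# Assumes the kernel under test has already been built and booted before
# this runs; dbench_throughput() is the helper sketched earlier, imported
# here under a hypothetical module name.
import sys
from measure import dbench_throughput  # hypothetical module name

THRESHOLD_MB_S = 120.0  # between the ~173 MB/s good and ~74 MB/s bad runs

tput = dbench_throughput()
if tput is None:
    sys.exit(125)  # measurement failed: tell bisect to skip this commit
sys.exit(0 if tput >= THRESHOLD_MB_S else 1)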

So the current biggest suspect is CFS, but I don't have enough numbers yet 
to be able to point a finger at it with 100% certainty. Hopefully soon.

Just my $0.02

-- 
Jiri Kosina
SUSE Labs
