Date:	Sun, 26 Oct 2008 10:00:48 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Mike Galbraith <efault@....de>
Cc:	Jiri Kosina <jkosina@...e.cz>, David Miller <davem@...emloft.net>,
	rjw@...k.pl, Ingo Molnar <mingo@...e.hu>, s0mbre@...rvice.net.ru,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.

On Sun, 2008-10-26 at 09:46 +0100, Mike Galbraith wrote: 
> On Sun, 2008-10-26 at 01:10 +0200, Jiri Kosina wrote:
> > On Sat, 25 Oct 2008, David Miller wrote:
> > 
> > > But note that tbench performance improved a bit in 2.6.25.
> > > In my tests I noticed a similar effect, but from 2.6.23 to 2.6.24,
> > > weird.
> > > Just for the public record here are the numbers I got in my testing.
> > 
> > I am currently looking at a very similar issue. For the 
> > public record, here are the numbers we have been able to come up with so 
> > far (measured with dbench, so the absolute values are slightly different, 
> > but they still show a similar pattern):
> > 
> > 208.4 MB/sec  -- vanilla 2.6.16.60
> > 201.6 MB/sec  -- vanilla 2.6.20.1
> > 172.9 MB/sec  -- vanilla 2.6.22.19
> > 74.2 MB/sec   -- vanilla 2.6.23
> >  46.1 MB/sec  -- vanilla 2.6.24.2
> >  30.6 MB/sec  -- vanilla 2.6.26.1
> > 
> > I.e. a huge drop at 2.6.23 (this was with the default config for each 
> > respective kernel).
> > 2.6.23-rc1 shows 80.5 MB/s, i.e. a few % better than final 2.6.23, but 
> > still pretty bad. 
> > 
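For concreteness, the relative drops implied by the numbers above can be worked out quickly; a small sketch using only the figures quoted (the `drop_pct` helper is made up for illustration):

```python
# dbench throughput (MB/s) as quoted above, per vanilla kernel version.
results = {
    "2.6.16.60": 208.4,
    "2.6.20.1": 201.6,
    "2.6.22.19": 172.9,
    "2.6.23": 74.2,
    "2.6.24.2": 46.1,
    "2.6.26.1": 30.6,
}

def drop_pct(old, new):
    """Relative throughput loss, in percent, going from `old` to `new`."""
    return 100.0 * (results[old] - results[new]) / results[old]

print(f"2.6.22.19 -> 2.6.23:   {drop_pct('2.6.22.19', '2.6.23'):.1f}%")   # ~57.1%
print(f"2.6.16.60 -> 2.6.26.1: {drop_pct('2.6.16.60', '2.6.26.1'):.1f}%") # ~85.3%
```

That is a 57% drop in a single release, and an 85% drop overall from 2.6.16.60 to 2.6.26.1.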
> > I have gone through the commits that went into -rc1 and tried to figure 
> > out which one could be responsible. Here are the numbers:
> > 
> >  85.3 MB/s for 2ba2d00363 (just before on-demand readahead has been merged)
> >  82.7 MB/s for 45426812d6 (before cond_resched() has been added into page 
> >                            invalidation code)
> > 187.7 MB/s for c1e4fe711a4 (just before CFS scheduler has been merged)
> > 
> > So the current biggest suspect is CFS, but I don't have enough numbers yet 
> > to be able to point a finger at it with 100% certainty. Hopefully soon. 
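FWIW, that commit-by-commit narrowing can be mechanised with `git bisect run`. A minimal sketch against a throwaway toy repository (not the kernel tree; in the real case the run script would build each kernel, boot it, run dbench, and exit non-zero when MB/s falls below a chosen threshold):

```shell
# Toy demo of `git bisect run`: one commit introduces a "regression"
# marker file, standing in for the change that hurts dbench throughput.
dir=$(mktemp -d)
cd "$dir"
git init -q
# helper so the demo works without global git identity configured
G() { git -c user.email=ed@example.org -c user.name=editor "$@"; }
G commit -q --allow-empty -m "good: baseline"
G commit -q --allow-empty -m "good: unrelated change"
touch regression                       # stand-in for the offending change
G add regression
G commit -q -m "bad: dbench throughput drops"
G commit -q --allow-empty -m "later commit"
git bisect start HEAD HEAD~3           # bad = HEAD, good = three commits back
# Run script convention: exit 0 = good, 1 = bad. Here we just look for
# the marker file; a real script would compare dbench MB/s to a threshold.
result=$(git bisect run sh -c '! test -f regression')
git bisect reset >/dev/null 2>&1
echo "$result" | grep "first bad commit"
```

The run ends by naming the first bad commit, which is exactly the answer the manual narrowing above is after.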

> I reproduced this on my Q6600 box.  However, I also reproduced it with
> 2.6.22.19.  What I think you're seeing is just dbench creating a
> massive train wreck. 

Wasn't dbench one of those non-benchmarks that thrives on randomness and
unfairness?

Andrew said recently:
  "dbench is pretty chaotic and it could be that a good change causes
   dbench to get worse.  That's happened plenty of times in the past."

So I'm not inclined to worry too much about dbench in any way, shape or
form.


