Date:	Mon, 12 Oct 2009 10:20:04 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-fsdevel@...r.kernel.org,
	Ravikiran G Thirumalai <kiran@...lex86.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	samba-technical@...ts.samba.org
Subject: Re: [rfc][patch] store-free path walking

On Mon, Oct 12 2009, Nick Piggin wrote:
> On Mon, Oct 12, 2009 at 05:58:43AM +0200, Nick Piggin wrote:
> > On Wed, Oct 07, 2009 at 11:56:57AM +0200, Jens Axboe wrote:
> > Try changing the 'statvfs' syscall in dbench to 'statfs'.
> > glibc has to do some nasty stuff parsing /proc/mounts to
> > make statvfs work. On my 2s8c opteron it goes like this:
> > clients     vanilla (MB/s)     vfs scale (MB/s)
> > 1            476                447
> > 2           1092               1128
> > 4           2027               2260
> > 8           2398               4200
> > 
> > Single threaded performance isn't as good so I need to look
> > at the reasons for that :(. But it's practically linearly
> > scalable now. The dropoff at 8 I'd say is probably due to
> > the memory controllers running out of steam rather than
> > cacheline or lock contention.
> 
> Ah, no on a bigger machine it starts slowing down again due
> to shared cwd contention, possibly due to creat/unlink type
> events. This could be improved by not restarting the entire
> path walk when we run into trouble but just trying to proceed
> from the last successful element.
> 
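For anyone who wants to reproduce that change, the difference at the
libc level is roughly the below (just a sketch, /mnt/dbench is a
placeholder path, and this is not the actual dbench patch): statvfs()
is the glibc wrapper that has to parse /proc/mounts, statfs() is the
plain syscall.

#include <stdio.h>
#include <sys/statvfs.h>	/* POSIX statvfs(3), emulated by glibc */
#include <sys/vfs.h>		/* Linux statfs(2), a single syscall */

int main(void)
{
	struct statvfs svfs;
	struct statfs sfs;

	/* glibc's statvfs() reads and parses /proc/mounts to fill in
	 * the mount flags, which is the expensive part under load. */
	if (statvfs("/mnt/dbench", &svfs) == 0)
		printf("statvfs: %lu blocks free\n",
		       (unsigned long)svfs.f_bfree);

	/* statfs() goes straight to the kernel and skips /proc. */
	if (statfs("/mnt/dbench", &sfs) == 0)
		printf("statfs:  %lu blocks free\n",
		       (unsigned long)sfs.f_bfree);

	return 0;
}
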
I was starting to do a few runs, but there's something funky going on
here. The throughput rates are consistent throughout a single run, but
not at all between runs. I suspect this may be due to CPU placement.
The numbers also look pretty odd; here's an example from a patched
kernel with dbench using statfs:

Clients         Patched (relative, 1 client = 1.00)
----------------------------------------------------
1                1.00
2                1.23
4                2.96
8                1.22
16               0.89
32               0.83
64               0.83

So the numbers fluctuate by as much as 20% from run to run.
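
If CPU placement is the culprit, pinning the run should make it
repeatable, e.g. something like

  numactl --cpunodebind=0 --membind=0 dbench 8

(assuming numactl is installed here; taskset -c over one socket would
do as well).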

OK, so it seems the FAIR_SLEEPERS sched feature is responsible for
this; if I turn that off, I get more consistent numbers. The below
table is -git vs the vfs patches on -git. Baseline is -git with 1
client, > 1.00 is faster and vice versa.

Clients         Vanilla         VFS scale
-----------------------------------------
1                1.00            0.96
2                1.69            1.71
4                2.16            2.98
8                0.99            1.00
16               0.90            0.85

As you can see, it still quickly spirals into spending most of the
time (> 95%) spinning on a lock, which kills scaling.
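
For reference, that's with the feature flipped through the usual
debugfs knob (assuming debugfs is mounted at /sys/kernel/debug):

  echo NO_FAIR_SLEEPERS > /sys/kernel/debug/sched_features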

> Anyway, if you do get a chance to run dbench with this
> modification, I would appreciate seeing a profile with call
> traces (my bigger system is ia64 which doesn't do perf yet).

For what number of clients?
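
Assuming perf's call graph recording behaves on this box, I'd grab it
with something like

  perf record -g -a dbench <nr_clients>
  perf report

once I know which client count you're interested in.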

-- 
Jens Axboe

