Message-ID: <20091012055920.GD25882@wotan.suse.de>
Date: Mon, 12 Oct 2009 07:59:20 +0200
From: Nick Piggin <npiggin@...e.de>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel@...r.kernel.org,
Ravikiran G Thirumalai <kiran@...lex86.org>,
Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
samba-technical@...ts.samba.org
Subject: Re: [rfc][patch] store-free path walking
On Mon, Oct 12, 2009 at 05:58:43AM +0200, Nick Piggin wrote:
> On Wed, Oct 07, 2009 at 11:56:57AM +0200, Jens Axboe wrote:
> Try changing the 'statvfs' syscall in dbench to 'statfs'.
> glibc has to do some nasty stuff parsing /proc/mounts to
> make statvfs work. On my 2s8c opteron it goes like this:
> clients   vanilla kernel (MB/s)   vfs scale (MB/s)
> 1              476                     447
> 2             1092                    1128
> 4             2027                    2260
> 8             2398                    4200
>
> Single threaded performance isn't as good so I need to look
> at the reasons for that :(. But it's practically linearly
> scalable now. The dropoff at 8 I'd say is probably due to
> the memory controllers running out of steam rather than
> cacheline or lock contention.
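
[The statvfs/statfs distinction above can be sketched in a few lines of C. This is an illustrative comparison, not code from dbench: statfs(2) is a single syscall, while glibc's statvfs(3) additionally has to consult the mount table to reconstruct mount flags, which is the costly parsing referred to above. The test path "/" is an arbitrary choice.]

	#include <stdio.h>
	#include <sys/statfs.h>   /* statfs(2): one direct syscall */
	#include <sys/statvfs.h>  /* statvfs(3): glibc wrapper, also reads the mount table */

	int main(void)
	{
		struct statfs sfs;
		struct statvfs svfs;

		/* statfs() fills in fs stats with a single trip into the kernel */
		if (statfs("/", &sfs) == 0)
			printf("statfs:  bsize=%ld blocks=%ld\n",
			       (long)sfs.f_bsize, (long)sfs.f_blocks);

		/* statvfs() returns similar data, but glibc does extra work
		 * (e.g. filling f_flag from mount information) */
		if (statvfs("/", &svfs) == 0)
			printf("statvfs: bsize=%lu blocks=%lu\n",
			       (unsigned long)svfs.f_bsize,
			       (unsigned long)svfs.f_blocks);

		return 0;
	}
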
Ah, no: on a bigger machine it starts slowing down again due
to shared cwd contention, possibly due to creat/unlink type
events. This could be improved by not restarting the entire
path walk when we run into trouble, but instead trying to proceed
from the last successful element.
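
[A toy sketch of that resume idea, with entirely hypothetical names (this is not the kernel's path-walk code): on a per-component failure, retry only the failed component instead of restarting the whole path from the root. The forced one-time failure stands in for whatever makes the real walk bail out.]

	#include <stdio.h>
	#include <string.h>

	/* Hypothetical per-component lookup: returns 0 on success, -1 when
	 * the walk "runs into trouble". Here the third component fails once
	 * to force a retry. */
	static int fail_once = 1;
	static int lookup_component(const char *name, int depth)
	{
		if (depth == 2 && fail_once) {
			fail_once = 0;
			return -1;
		}
		printf("looked up '%s'\n", name);
		return 0;
	}

	/* Resume-style walk: on failure, retry from the failed component
	 * rather than restarting from the first one. (A real implementation
	 * would bound the retries.) */
	static int walk(char *path)
	{
		int depth = 0;
		char *save = NULL;
		for (char *c = strtok_r(path, "/", &save); c; ) {
			if (lookup_component(c, depth) < 0)
				continue;       /* retry this component only */
			c = strtok_r(NULL, "/", &save);
			depth++;
		}
		return 0;
	}

	int main(void)
	{
		char path[] = "usr/share/doc/dbench";
		walk(path);     /* each component is looked up exactly once on stdout */
		return 0;
	}
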
Anyway, if you do get a chance to run dbench with this
modification, I would appreciate seeing a profile with call
traces (my bigger system is ia64 which doesn't do perf yet).
Thanks,
Nick
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/