Message-Id: <1240916310.7620.147.camel@twins>
Date: Tue, 28 Apr 2009 12:58:30 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Al Viro <viro@...IV.linux.org.uk>, npiggin@...e.de,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 00/27] [rfc] vfs scalability patchset
On Tue, 2009-04-28 at 05:09 -0400, Christoph Hellwig wrote:
> On Sat, Apr 25, 2009 at 09:06:49AM +0100, Al Viro wrote:
> > Maybe... What Eric proposed is essentially a reuse of s_list for per-inode
> > list of struct file. Presumably with something like i_lock for protection.
> > So that's not a conflict.
>
> But what do we actually want it for? Right now it's only used for
> ttys, which Nick has split out, and for remount r/o. For the normal
> remount r/o case it will go away once we have proper per-sb writer
> counts. And the forced remount r/o from sysrq is completely broken.
>
> A while ago Peter had patches for files_lock scalability that went even
> further than Nick's, and if I remember the arguments correctly just
> splitting the lock wasn't really enough and he required additional
> batching because there were just too many lock roundtrips. (Peter, do
> you remember the details?)
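If it helps, what Al describes above would look roughly like this. This
is a hypothetical sketch only; the field and function names are made up
and not taken from Eric's patch (f_list in particular is approximate):

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* imagine struct inode grows a list head for its open files: */
/*	struct list_head	i_files; */

static void file_inode_list_add(struct file *file, struct inode *inode)
{
	spin_lock(&inode->i_lock);
	list_add(&file->f_list, &inode->i_files);
	spin_unlock(&inode->i_lock);
}

static void file_inode_list_del(struct file *file, struct inode *inode)
{
	spin_lock(&inode->i_lock);
	list_del_init(&file->f_list);
	spin_unlock(&inode->i_lock);
}

That still contends, but only per inode, which for most workloads
spreads far better than either a global or a per-sb lock.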
Suppose you have some task doing open/close on one filesystem (a rather
common scenario); then having the lock split at the superblock level
doesn't help you.
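To make that concrete, with a hypothetical per-sb lock (s_files_lock is
a made-up name, field names approximate) every open/close on that one
fs still funnels through the same spinlock:

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/spinlock.h>

static void file_sb_list_add(struct file *file, struct super_block *sb)
{
	spin_lock(&sb->s_files_lock);		/* made-up per-sb lock */
	list_add(&file->f_list, &sb->s_files);
	spin_unlock(&sb->s_files_lock);
}

/*
 * N tasks doing open()/close() on the same fs all bounce the same
 * sb->s_files_lock cacheline -- for this workload the per-sb split
 * behaves exactly like the old global files_lock.
 */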
My patches were admittedly somewhat over the top; they could cause more
cacheline bounces, but they significantly reduced the contention,
delivering an overall improvement, as can be seen from the
micro-benchmark results posted in that thread.
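The batching idea, in a much simplified form (an illustration, not the
actual patch; files_list, FILE_BATCH and the helper names are all made
up): queue additions on a per-cpu list and only splice them onto the
shared list once per batch, trading some extra cacheline traffic for
far fewer lock roundtrips.

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(files_lock);	/* stand-in for the real lock */
static LIST_HEAD(files_list);		/* stand-in for the real list */

#define FILE_BATCH	16		/* batch size picked arbitrarily */

struct file_batch {
	struct list_head	list;
	int			nr;
};
static DEFINE_PER_CPU(struct file_batch, file_batch);

static int __init file_batch_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_LIST_HEAD(&per_cpu(file_batch, cpu).list);
	return 0;
}

static void file_list_add_batched(struct file *file)
{
	struct file_batch *fb = &get_cpu_var(file_batch);

	/* queue locally, no shared lock taken */
	list_add(&file->f_list, &fb->list);
	if (++fb->nr >= FILE_BATCH) {
		/* one lock roundtrip moves the whole batch */
		spin_lock(&files_lock);
		list_splice_init(&fb->list, &files_list);
		spin_unlock(&files_lock);
		fb->nr = 0;
	}
	put_cpu_var(file_batch);
}

/* (removal would also have to look at the per-cpu lists; left out
 * here for brevity) */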
Anyway, your solution of simply removing all uses of the global files
list still seems like the most attractive option.