Message-ID: <m1tz4ctuu7.fsf@fess.ebiederm.org>
Date: Sat, 25 Apr 2009 12:08:16 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Christoph Hellwig <hch@...radead.org>
Cc: Al Viro <viro@...IV.linux.org.uk>, npiggin@...e.de,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 00/27] [rfc] vfs scalability patchset

Christoph Hellwig <hch@...radead.org> writes:
> On Sat, Apr 25, 2009 at 05:18:29AM +0100, Al Viro wrote:
>> However, files_lock part 2 looks very dubious - if nothing else, I would
>> expect that you'll get *more* cross-CPU traffic that way, since the CPU
>> where final fput() runs will correlate only weakly (if at all) with one
>> where open() had been done. So you are getting more cachelines bouncing.
>> I want to see the numbers for this one, and on different kinds of loads,
>> but as it is I'm very sceptical. BTW, could you try to collect stats
>> along the lines of "CPU #i has done N_{i,j} removals from sb list for
>> files that had been in list #j"?
>>
>> Splitting files_lock on per-sb basis might be an interesting variant, too.
>
> We should just kill files_lock and s_files completely. The remaining
> users are the may-remount-r/o checks, and with counters in place not
> only on the vfsmount but also on the superblock we can kill
> fs_may_remount_ro in its current form.

Can we? At my first glance at that code I asked myself whether we could
examine i_writecount instead of going to the file. My impression was
that we were deliberately counting only the persistent write references
held by open files, not the transient write references, since only the
persistent ones matter: transient write references can, at least in
theory, be flushed while the filesystem is remounting read-only.
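
Roughly, the alternative I was wondering about looks like the sketch
below. This is not a patch, the helper name is made up, and the locking
follows current mainline (the global inode_lock and the per-sb s_inodes
list). The point of the sketch is the difference in what gets counted:
a walk over the inodes sees every positive i_writecount, including
transient holders from get_write_access(), whereas the existing
fs_may_remount_ro() walk over sb->s_files only sees files that were
opened for write.

#include <linux/fs.h>		/* struct super_block, struct inode */
#include <linux/writeback.h>	/* inode_lock, as of ~2.6.29 */

/* Sketch only: does any inode on this sb hold a write reference? */
static int sb_has_write_refs(struct super_block *sb)
{
	struct inode *inode;
	int busy = 0;

	spin_lock(&inode_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		/*
		 * A positive i_writecount means somebody holds write
		 * access -- whether from a long-lived open or from a
		 * transient get_write_access() caller.
		 */
		if (atomic_read(&inode->i_writecount) > 0) {
			busy = 1;
			break;
		}
	}
	spin_unlock(&inode_lock);
	return busy;
}

Whether the transient references can really be flushed or waited out
during remount is the part I am not sure about, and is presumably why
the current code only looks at open files.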
Eric