Date:	Fri, 2 Jul 2010 03:52:30 +1000
From:	Nick Piggin <npiggin@...e.de>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Dave Chinner <david@...morbit.com>, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, John Stultz <johnstul@...ibm.com>,
	Frank Mayhar <fmayhar@...gle.com>
Subject: Re: [patch 00/52] vfs scalability patches updated

On Thu, Jul 01, 2010 at 10:35:35AM -0700, Linus Torvalds wrote:
> On Wed, Jun 30, 2010 at 5:40 AM, Nick Piggin <npiggin@...e.de> wrote:
> >>
> >> That's a pretty big ouch. Why does RCU freeing of inodes cause that
> >> much regression? The RCU freeing is out of line, so where does the big
> >> impact come from?
> >
> > That comes mostly from inability to reuse the cache-hot inode structure,
> > and the cost to go over the deferred RCU list and free them after they
> > get cache cold.
> 
> I do wonder if this isn't a big design bug.

It's possible, yes, although a lot of that drop does come from
hitting RCU and overrunning the slab allocator queues. It was
closer to 10% when doing small numbers of creat/unlink loops.

 
> Most of the time with RCU, we don't need to wait to actually do the
> _freeing_ of the individual data structure, we only need to make sure
> that the data structure remains of the same _type_. IOW, we can free
> it (and re-use it), but the backing storage cannot be released to the
> page cache. That's what SLAB_DESTROY_BY_RCU should give us.
> 
> Is that not possible in this situation? Do we really need to keep the
> inode _identity_ around for RCU?
> 
> If you use just SLAB_DESTROY_BY_RCU, then inode re-use remains, and
> cache behavior would be much improved. The usual requirement for
> SLAB_DESTROY_BY_RCU is that you only touch a lock (and perhaps
> re-validate the identity) in the RCU-reader paths. Could that be made
> to work?

I definitely thought of that. I had thought it would not be
possible with the store-free path walk patches, though, because
we need to check some inode properties (e.g. permissions), so the
usual approach of taking a per-entry lock would defeat the whole
purpose of the store-free path walk.

But you've got me thinking about it again, and it should be possible
to do just using the dentry seqlock. IOW, if the inode gets
disconnected from the dentry (and can then possibly be freed and
reused), just retry the lookup.

It may be a little tricky. I'll wait until the path-walk code is
more polished before trying it.
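
To make that concrete, roughly the shape I have in mind (sketch
only, not against any real tree; rcu_walk_check_exec and the exact
field accesses are invented for illustration, and it assumes the
d_seq seqcount added by the path-walk patches):

/*
 * RCU-walk reads the inode fields it needs under the dentry seqlock.
 * With SLAB_DESTROY_BY_RCU the inode memory may be reused for a
 * different inode while we look at it, but it stays inode-typed
 * memory, so the loads are safe and the seqcount retry catches the
 * dentry<->inode disconnect.
 */
static int rcu_walk_check_exec(struct dentry *dentry)
{
	struct inode *inode;
	unsigned seq;
	umode_t mode;

	do {
		seq = read_seqcount_begin(&dentry->d_seq);
		inode = dentry->d_inode;
		if (!inode)
			return -ECHILD;	/* drop back to ref-walk */
		mode = inode->i_mode;	/* possibly a reused object... */
	} while (read_seqcount_retry(&dentry->d_seq, seq));
	/* ...but d_seq was stable, so it was still our inode. */

	return (mode & S_IXUGO) ? 0 : -EACCES;
}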

> 
> Because that 27% drop really is pretty distressing.
> 
> That said, open (of the non-creating kind), close, and stat are
> certainly more important than creating and freeing files. So as a
> trade-off, it's probably the right thing to do. But if we can get all
> the improvement _without_ that big downside, that would obviously be
> better yet.

We actually have bigger regressions than that in other code
paths. The RCU freeing of files structs causes a similar,
roughly 20-30% regression in open/close.

I actually have a (proper) patch to make that use DESTROY_BY_RCU
too. It slows down fd lookup by a tiny bit, though (lock, load,
branch, increment, unlock versus an atomic inc), but with the same
number of atomic ops.
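
For reference, the fast-path comparison looks something like this
(sketch only, not the actual patch; treating f_count as a plain
count under f_lock is an assumption of the sketch):

/*
 * With the current RCU freeing of struct file, taking a reference
 * is a single atomic op:
 *
 *	file = fcheck_files(files, fd);
 *	if (file && !atomic_long_inc_not_zero(&file->f_count))
 *		file = NULL;		(raced with the final fput)
 *
 * With SLAB_DESTROY_BY_RCU the object can be reused under us, so we
 * must lock and re-validate its identity before taking the reference:
 */
rcu_read_lock();
file = fcheck_files(files, fd);
if (file) {
	spin_lock(&file->f_lock);		/* lock */
	if (fcheck_files(files, fd) == file)	/* load, branch */
		file->f_count++;		/* increment */
	else
		file = NULL;	/* slab object now holds a different file */
	spin_unlock(&file->f_lock);		/* unlock */
}
rcu_read_unlock();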

