Message-ID: <db6e44833b795051ce612ce26bed38a75bc7623a.camel@kernel.org>
Date:   Fri, 01 Apr 2022 05:29:20 -0400
From:   Jeff Layton <jlayton@...nel.org>
To:     Dave Chinner <david@...morbit.com>
Cc:     viro@...iv.linux.org.uk, ceph-devel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fs: change test in inode_insert5 for adding to the sb
 list

On Fri, 2022-04-01 at 14:53 +1100, Dave Chinner wrote:
> On Thu, Mar 31, 2022 at 06:56:32PM -0400, Jeff Layton wrote:
> > inode_insert5() currently looks at I_CREATING to decide whether to
> > insert the inode into the sb list. This test is a bit ambiguous,
> > though, as the I_CREATING state is not directly related to that list.
> > 
> > This test is also problematic for some upcoming ceph changes to add
> > fscrypt support. We need to be able to allocate an inode using
> > new_inode and insert it into the hash later if we end up using it;
> > doing that now means the inode gets added to the sb list twice,
> > corrupting the list.
> > 
> > What we really want to know in this test is whether the inode is
> > already on its superblock list, so that we add it only if it isn't.
> > Have it test list_empty() instead, and ensure that the list is
> > always initialized by doing so in inode_init_once(). The inode is
> > only ever removed from the list with list_del_init(), so that
> > should be sufficient.
> > 
> > Suggested-by: Al Viro <viro@...iv.linux.org.uk>
> > Signed-off-by: Jeff Layton <jlayton@...nel.org>
> > ---
> >  fs/inode.c | 11 ++++++++---
> >  1 file changed, 8 insertions(+), 3 deletions(-)
> > 
> > This is the alternate approach that Al suggested to me on IRC. I think
> > this is likely to be more robust in the long run, and we can avoid
> > exporting another symbol.
> 
> Looks good to me.
> 
> Reviewed-by: Dave Chinner <dchinner@...hat.com>
> 
> FWIW, I'm getting ready to resend patches originally written by
> Waiman Long years ago to convert the inode sb list to a different
> structure (per-cpu lists) for scalability reasons, but it still
> allows using list_empty() to check whether the inode is on the list
> or not, so I don't see a problem with this change at all.
> 

Thanks, Dave.
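
For anyone following along at home, the test change boils down to
something like this (a sketch of the idea, not the exact diff):

	/* Before: inode_insert5() skipped the sb list add whenever the
	 * caller had marked the inode I_CREATING. */
	if (!(inode->i_state & I_CREATING))
		inode_sb_list_add(inode);

	/* After: add the inode only if it isn't already on its sb list.
	 * This works because i_sb_list is now initialized in
	 * inode_init_once() and only ever removed with list_del_init(),
	 * so list_empty() is a reliable test. */
	if (list_empty(&inode->i_sb_list))
		inode_sb_list_add(inode);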

> > Al, if you're ok with this, would you mind taking this in via your tree?
> > I'd like to see this sit in linux-next for a bit so we can see if any
> > benchmarks get dinged.
> 
> I think that is unlikely - the sb inode list just doesn't show up in
> profiles until you are pushing several hundred thousand inodes a
> second through the inode cache and there really aren't a lot of
> workloads out there that do that. At that point, sb list lock
> contention becomes the issue, not the requirement to add in-use
> inodes to the sb list...
> 
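
Right. For reference, the add itself is just a list_add() under the
per-sb lock, roughly this (simplified from fs/inode.c):

	void inode_sb_list_add(struct inode *inode)
	{
		spin_lock(&inode->i_sb->s_inode_list_lock);
		list_add(&inode->i_sb_list, &inode->i_sb->s_inode_list);
		spin_unlock(&inode->i_sb->s_inode_list_lock);
	}

so it's s_inode_list_lock that ends up contended, not the test that
gates the add.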

My (minor) concern was that since we're now initializing this list for
all allocations, not just in new_inode, it could potentially slow down
some callers. I agree that it seems pretty unlikely to be an issue
though.
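
To put a rough number on it: the new work amounts to one extra
INIT_LIST_HEAD() call in inode_init_once(), and INIT_LIST_HEAD() is
just two pointer stores (roughly, per include/linux/list.h):

	static inline void INIT_LIST_HEAD(struct list_head *list)
	{
		WRITE_ONCE(list->next, list);
		list->prev = list;
	}

Since inode_init_once() already memsets the whole inode, two more
stores into memory we're already writing should be down in the noise.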

> e.g. concurrent 'find <...> -ctime' operations on XFS hit sb list
> lock contention limits at about 600,000 inodes/s being
> instantiated, stat()d, and reclaimed from memory. With
> Waiman's dlist code I mention above, it'll do 1.5 million inodes/s
> for the same CPU usage.  And a concurrent bulkstat workload goes
> from 600,000 inodes/s to over 6 million inodes/s for the same
> CPU usage.  That bulkstat workload is hitting memory reclaim
> scalability limits as I'm turning over ~12GB/s of cached memory on a
> machine with only 16GB RAM...
> 
> Cheers,
> 
> Dave.

-- 
Jeff Layton <jlayton@...nel.org>
