Message-ID: <20100701075426.GC22976@laptop>
Date:	Thu, 1 Jul 2010 17:54:26 +1000
From:	Nick Piggin <npiggin@...e.de>
To:	Dave Chinner <david@...morbit.com>
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	John Stultz <johnstul@...ibm.com>,
	Frank Mayhar <fmayhar@...gle.com>
Subject: Re: [patch 29/52] fs: icache lock i_count

On Thu, Jul 01, 2010 at 12:36:18PM +1000, Dave Chinner wrote:
> On Wed, Jun 30, 2010 at 10:05:02PM +1000, Nick Piggin wrote:
> > On Wed, Jun 30, 2010 at 05:27:02PM +1000, Dave Chinner wrote:
> > > On Thu, Jun 24, 2010 at 01:02:41PM +1000, npiggin@...e.de wrote:
> > > > Protect inode->i_count with i_lock, rather than having it atomic.
> > > > Next step should also be to move things together (eg. the refcount increment
> > > > into d_instantiate, which will remove a lock/unlock cycle on i_lock).
> > > .....
> > > > Index: linux-2.6/fs/inode.c
> > > > ===================================================================
> > > > --- linux-2.6.orig/fs/inode.c
> > > > +++ linux-2.6/fs/inode.c
> > > > @@ -33,14 +33,13 @@
> > > >   * inode_hash_lock protects:
> > > >   *   inode hash table, i_hash
> > > >   * inode->i_lock protects:
> > > > - *   i_state
> > > > + *   i_state, i_count
> > > >   *
> > > >   * Ordering:
> > > >   * inode_lock
> > > >   *   sb_inode_list_lock
> > > >   *     inode->i_lock
> > > > - * inode_lock
> > > > - *   inode_hash_lock
> > > > + *       inode_hash_lock
> > > >   */
> > > 
> > > I thought that the rule governing the use of inode->i_lock was that
> > > it can be used anywhere as long as it is the innermost lock.
> > > 
> > > Hmmm, no references in the code or documentation. Google gives a
> > > pretty good reference:
> > > 
> > > http://www.mail-archive.com/linux-ext4@vger.kernel.org/msg02584.html
> > > 
> > > Perhaps a different/new lock needs to be used here?
> > 
> > Well I just changed the order (and documented it to boot :)). It's
> > pretty easy to verify that LOR is no problem. inode hash is only
> > taken in a very few places so other code outside inode.c is fine to
> > use i_lock as an innermost lock.
> 
> It's not just the inode_hash_lock - you move four or five other
> locks under inode->i_lock as the series progresses. IOWs, there's
> now many paths and locking orders where the i_lock is not innermost.
> If we go forward with this, it's only going to get more complex and
> eventually somewhere we'll need a new lock for an innermost
> operation because inode->i_lock is no longer safe to use....

OK, yes, it's more than one lock, but I don't quite see the problem.
The locks are mostly confined to inode.c and fs-writeback.c, and
filesystems can basically use i_lock as an innermost lock for their
purposes. If they get it wrong, lockdep will tell them pretty quickly.
And it's documented, to boot.

 
> Seriously: use a new lock for high level inode operations you are
> optimising - don't repurpose an existing lock with different usage
> rules just because it's convenient.

That's what scalability development is all about, I'm afraid. Just
adding more and more locks is what makes things more complex, so
you have to juggle or change locks where possible. If a difficulty
with locking pops up in the future, I'd prefer to look at it then.

I don't think any filesystems cared at all when I converted them.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
