Date:	Sat, 16 Oct 2010 05:02:19 +1100
From:	Nick Piggin <npiggin@...nel.dk>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Nick Piggin <npiggin@...nel.dk>,
	Dave Chinner <david@...morbit.com>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 17/18] fs: icache remove inode_lock

On Fri, Oct 15, 2010 at 01:52:47PM -0400, Christoph Hellwig wrote:
> Thanks for trying to get back to a technical discussion.  Maybe we
> can just move the technical comments to direct replies to the patches
> and leave this not very helpful subthread behind?

I had plenty of other technical comments, which you ignored, about
per-zone shrinkers and the strategy for merging the overall series.
If you are going to keep claiming these are show stoppers, please
address my comments there too.

> > rcu_read_lock();
> > list_for_each_entry_rcu(inode, list, i_list) {
> >     if (inode->blah ...) {
> >         spin_lock(&list_lock);
> >         if (unlikely(list_empty(&inode->i_list))) {
> >             spin_unlock(&list_lock);
> >             continue;
> >         }
> >         do_something(inode);
> >         spin_unlock(&list_lock);
> >     }
> > }
> > rcu_read_unlock();
> 
> But that's not what we do for icache.  For the validity checking
> during lookup we check the I_FREEING bit, which is modified under
> i_lock and can be read without any locking.  So everything is just
> fine when moving on to RCU locking.

For those lookups where you are taking i_lock anyway, the code will
look the same, except that the narrower i_lock coverage loses the
ability to lock all of the inode's icache state at once (as we can
practically do today with inode_lock).

This was a key consideration for maintainability for me. I also
spelled out quite clearly the i_lock-reduction approach for a future
speedup, if the single atomic in the slowpath really is that
important. But I want to add those kinds of complications to the
locking rules as individual small patches at the end of the series,
where the streamlining work is happening. If they are not worth the
gains, I much prefer to keep the locking more regular.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
