Message-ID: <20131003204142.GL13318@ZenIV.linux.org.uk>
Date: Thu, 3 Oct 2013 21:41:42 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 17/17] RCU'd vfsmounts
On Thu, Oct 03, 2013 at 01:19:16PM -0700, Linus Torvalds wrote:
> Hmm. The CPU2 mntput can only happen under RCU readlock, right? After
> the RCU grace period _and_ if the umount is going ahead, nothing
> should have a mnt pointer, right?
umount -l doesn't care.
> So I'm wondering if you couldn't just have a synchronize_rcu() in that
> umount path, after clearing mnt_ns. At that point you _know_ you're
> the only one that should have access to the mnt.
We have it there.  See namespace_unlock().  And you are right about the
locking rules for umount_tree(), except that the caller is responsible
for dropping those locks.  With the (potentially final) mntput() happening
after both (well, as part of namespace_unlock(), done after synchronize_rcu()).
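IOW, the ordering is roughly this - a standalone sketch with stub bodies
and a made-up struct mount, not the actual fs/namespace.c code; only the
ordering of the steps is the point:

/*
 * Simplified sketch of the sequence described above.  The struct and
 * the helper bodies are stand-ins; the real code lives in fs/namespace.c.
 */
struct mount { int placeholder; };		/* stand-in, not the real one */

static struct mount *unmounted;		/* victim collected by umount_tree() */

static void synchronize_rcu(void) { }		/* grace period, elided here */
static void mntput(struct mount *m) { (void)m; }  /* may be the final drop */

/* caller holds namespace_sem and mount_lock */
void umount_tree(struct mount *mnt)
{
	/*
	 * Detach from the namespace (mnt_ns cleared).  Lockless walkers
	 * under rcu_read_lock() may still see the mount, so it is only
	 * queued for disposal here, not dropped.
	 */
	unmounted = mnt;
}

/* runs after the caller has dropped those locks */
void namespace_unlock(void)
{
	synchronize_rcu();	/* wait out RCU readers that could still
				 * be looking at the victim */
	mntput(unmounted);	/* the (potentially final) mntput() */
}
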
The problem is this:

	A = 1, B = 1

	CPU1:
		A = 0
		<full barrier>
		synchronize_rcu()
		read B

	CPU2:
		rcu_read_lock()
		B = 0
		read A
Are we guaranteed that we won't end up with both reads seeing 1, in a situation
where that rcu_read_lock() comes too late to be noticed by synchronize_rcu()?
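FWIW, the same litmus test can be written as a standalone userspace
program; the version below assumes liburcu for the RCU primitives and
C11 atomics for A and B, so it only illustrates the interleaving in
question - it doesn't prove anything about the guarantee either way.
Something like "cc litmus.c -lurcu -lpthread" should build it.

/*
 * Userspace rendition of the litmus test above.  Assumes liburcu for
 * the RCU primitives and C11 atomics for A and B; illustrative only.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <urcu.h>

static atomic_int A = 1, B = 1;
static int cpu1_saw_B, cpu2_saw_A;

static void *cpu1(void *arg)
{
	(void)arg;
	rcu_register_thread();				/* liburcu bookkeeping */
	atomic_store(&A, 0);				/* A = 0 */
	atomic_thread_fence(memory_order_seq_cst);	/* <full barrier> */
	synchronize_rcu();				/* grace period */
	cpu1_saw_B = atomic_load(&B);			/* read B */
	rcu_unregister_thread();
	return NULL;
}

static void *cpu2(void *arg)
{
	(void)arg;
	rcu_register_thread();			/* readers must be registered */
	rcu_read_lock();			/* possibly "too late" for CPU1 */
	atomic_store(&B, 0);			/* B = 0 */
	cpu2_saw_A = atomic_load(&A);		/* read A */
	rcu_read_unlock();
	rcu_unregister_thread();
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_create(&t2, NULL, cpu2, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* the outcome in question: both sides still seeing 1 */
	printf("CPU1 read B=%d, CPU2 read A=%d\n", cpu1_saw_B, cpu2_saw_A);
	return 0;
}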