Message-Id: <20090607224137.e13be7f3.akpm@linux-foundation.org>
Date:	Sun, 7 Jun 2009 22:41:37 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Ian Kent <raven@...maw.net>
Cc:	autofs mailing list <autofs@...ux.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] autofs4 - remove hashed check in validate_wait()

On Mon, 08 Jun 2009 13:23:40 +0800 Ian Kent <raven@...maw.net> wrote:

> Andrew Morton wrote:
> > On Mon, 08 Jun 2009 11:25:37 +0800 Ian Kent <raven@...maw.net> wrote:
> > 
> >> The recent ->lookup() deadlock correction required the directory
> >> inode mutex to be dropped while waiting for expire completion. We
> >> were concerned about side effects from this change and one has
> >> been identified.
> >>
> >> When checking if a mount has already completed prior to adding a
> >> new mount request to the wait queue we check if the dentry is hashed
> >> and, if so, if it is a mount point. But, if a mount successfully
> >> completed while we slept on the wait queue mutex the dentry must
> >> exist for the mount to have completed so the test is not really
> >> needed.
> >>
> >> Mounts can also be done on top of a global root dentry, so for the
> >> above case, where a mount request completes and the wait queue entry
> >> has already been removed, the hashed test returning false can cause
> >> an incorrect callback to the daemon. Also, d_mountpoint() is not
> >> sufficient to check if a mount has completed for the multi-mount
> >> case when we don't have a real mount at the base of the tree.
> >>
> > 
> > I've been scratching my head trying to work out if this is a
> > needed-in-2.6.30 fix, but all I got was a bald spot.  Help?
> 
> Yeah, and why would you want to know that much about autofs? It's a
> wonder I have any hair at all. ;)
> 
> I think so, if possible, as it resolves an issue that is a side effect
> of the last patch I sent, which resolved a deadlock in ->lookup(). The
> problem occurs due to dropping the directory inode mutex before waiting
> for an expire. What isn't obvious is that holding the mutex (as we did
> previously) causes processes wanting to request mounts for other
> directories to wait, so we don't see the contention for the mount
> request wait queue that this patch addresses.
> 
> However, the issue only surfaces when there are a number of processes
> all trying to perform mounts at the same time. The test I ran used 10
> processes all using the same map.

<scratch scratch> 
What are the user-visible effects of the thing-which-this-fixed?
