Date:   Tue, 01 Nov 2016 10:02:03 +0800
From:   Ian Kent <raven@...maw.net>
To:     Al Viro <viro@...IV.linux.org.uk>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        autofs mailing list <autofs@...r.kernel.org>,
        Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Omar Sandoval <osandov@...ndov.com>
Subject: Re: [PATCH 1/8] vfs - change d_manage() to take a struct path

On Thu, 2016-10-27 at 14:50 +0800, Ian Kent wrote:
> On Thu, 2016-10-27 at 10:47 +0800, Ian Kent wrote:
> > 
> > On Thu, 2016-10-27 at 03:11 +0100, Al Viro wrote:
> > >
> > > How much testing did it get?  I've several test setups involving
> > > autofs, but they are nowhere near exhaustive and I don't have good
> > > enough feel of the codebase to slap together something with decent
> > > coverage...
> > It got my standard testing.
> > 
> > For that I use a modified version of the autofs Connectathon system.
> > 
> > It's more about testing a wide variety of syntax and map setups and so
> > exercises a large number of different types of autofs mounts.
> > 
> > It's meant to check normal operation, not so much stress testing, even
> > though it does perform quite a few mounts (around 250-300, not to
> > mention the autofs mounts themselves).
> > 
> > I have another standard test I call the submount-test; it was originally
> > written to stress test the most common problem I see, an expire racing
> > with a concurrent mount.
> > 
> > I didn't see any problems I couldn't explain in these, but I might need
> > to revisit the submount-test to see if it is still doing what I want.
> > 
> > OTOH, the pattern of mount and umount I see when the submount-test is
> > run does look like it is doing what I want, but it might not be getting
> > all the way to the top of the tree of mounts enough times over the
> > course of the test.
> > 
> > So I'm happy with my testing, just not as happy as I could be.
> Well, almost happy with my testing.
> 
> Naturally I also tested the specific case this series is meant to fix.
> 
> Basically:
> ls /mnt/foo            # do the initial automount
> unshare -m sleep 10 &  # hold the automount in a new namespace
> umount /mnt/foo        # pretend the mount timed out
> ls /mnt/foo            # try to access it again
> ls: cannot open directory '/mnt/foo': Too many levels of symbolic links
> 
> as seen on the autofs mailing list. My specific test was a little
> different but verified this was resolved.
> 
> Now that Al seems reasonably OK with the series, with some changes, I'll test
> some other use cases, mainly to verify the expire still functions as required.
> That might need more work.

I have done some further tests, specifically for (what I believe are) the two
most common use cases.

First, using automount(8) entirely within a container works fine, as expected.

But the second case, where automount(8) is run in the root namespace and
has automount directories bound into a container, does have a problem.
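
To make that second case concrete, here is a minimal sketch of the setup
(the paths, the slave propagation, and the sleep are illustrative, not my
actual test environment):

mkdir -p /container/rootfs/autofs               # the container's root tree
mount --bind /autofs /container/rootfs/autofs   # expose the automount dir
unshare -m --propagation slave sleep 300 &      # stand-in for the container:
                                                # a mount namespace with its
                                                # own copy of the mount tree

Mounts triggered under /autofs in the root namespace then propagate into
the container's copy, where container processes can hold them busy, while
automount(8) itself only runs in the root namespace.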

The problem is that may_umount_tree() only considers mounts in the root
namespace, which leads to expire attempts on mounts even when they are in
use in another namespace.

It's not a serious problem, as the umount attempt fails because the mount
is busy, but it would be good to avoid the callback overhead.
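
A rough way to observe those failed expire attempts (the map, paths, and
timeout are illustrative; -f, -d, and -t are standard automount(8)
options):

automount -f -d -t 60 &                  # daemon in the foreground, debug on
ls /autofs/foo                           # trigger the mount in the root namespace
unshare -m --propagation slave \
    sh -c 'cd /autofs/foo; sleep 300' &  # hold the mount busy in another namespace
sleep 70                                 # once the timeout passes, the debug log
                                         # should show expire attempts whose
                                         # umount fails with EBUSY

In principle, may_umount()'s check would catch that busy copy before the
callback is made at all.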

Unfortunately, it looks like transforming may_umount_tree() to use a
check similar to may_umount()'s introduces a race (picked up by my
submount-test) which I'm struggling to understand; I'll continue to work
on it.

Ian
