Message-ID: <0dfa7fc6-3a15-4adc-ad1d-81bb43f62919@themaw.net>
Date: Thu, 13 Nov 2025 08:14:36 +0800
From: Ian Kent <raven@...maw.net>
To: Christian Brauner <brauner@...nel.org>
Cc: Al Viro <viro@...iv.linux.org.uk>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
autofs mailing list <autofs@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 2/2] autofs: dont trigger mount if it cant succeed
On 12/11/25 19:01, Christian Brauner wrote:
> On Tue, Nov 11, 2025 at 08:27:42PM +0800, Ian Kent wrote:
>> On 11/11/25 18:55, Christian Brauner wrote:
>>> On Tue, Nov 11, 2025 at 10:24:35AM +0000, Al Viro wrote:
>>>> On Tue, Nov 11, 2025 at 11:19:59AM +0100, Christian Brauner wrote:
>>>>
>>>>>> + sbi->owner = current->nsproxy->mnt_ns;
>>>>> ns_ref_get()
>>>>> Can be called directly on the mount namespace.
>>>> ... and would leak all mounts in the mount tree, unless I'm missing
>>>> something subtle.
>>> Right, I thought you actually wanted to pin it.
>>> Anyway, you could take a passive reference but I think that's nonsense
>>> as well. The following should do it:
>> Right, I'll need to think about this for a little while, I did think
>> of using an id for the comparison but I diverged down the wrong path so
>> this is a very welcome suggestion. There's still the handling of where
>> the daemon goes away (crash or SIGKILL, yes people deliberately do this
>> at times, think simulated disaster recovery) which I've missed in this
> Can you describe the problem in more detail and I'm happy to help you
> out here. I don't yet understand what the issue is.
I thought the patch description was ok but I'll certainly try.
Consider using automount in a container.
For people to use autofs in a container, either automount(8) running in
the init mount namespace or an independent automount(8) instance running
entirely within the container can be used. The former is done by adding a
volume option (or options) to the container to essentially bind mount the
autofs mount into the container; the option syntax allows the volume to be
set propagation slave if that is not already the default (shared is bad,
the automounts must not propagate back to where they came from). If the
automount(8) instance runs entirely within the container that also works
fine, as everything is isolated within the container (no volume options
are needed).
Now consider unshare(1) (and there are other problematic cases, I think
systemd's private tmp gets caught here too). Using something like
"unshare -Urm" creates a mount namespace that includes any autofs mounts
and sets them propagation private. These mounts cannot be unmounted
within the mount namespace by the namespace creator, and accessing a
directory within the autofs mount will trigger a callback to automount(8)
in the init namespace, which mounts the requested mount. But the newly
created mount namespace is propagation private, so the process in the new
mount namespace loops around sending mount requests that cannot be
satisfied. The odd thing is that the second callback to automount(8)
returns an error, which does complete the ->d_automount() call but, for
some unknown reason, doesn't seem to break the loop in
__traverse_mounts(). One way to resolve this is to check whether the
mount can be satisfied and, if not, bail out immediately; returning an
error in this case does work.
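To illustrate the kind of check I have in mind, a rough sketch only (the
helper name is made up, it assumes the sbi->owner field added by the
patch, and whether the comparison ends up being a raw pointer, a
namespace id, or something that also accounts for propagation into the
caller's namespace is exactly what's being discussed above):

static int autofs_can_trigger(struct autofs_sb_info *sbi)
{
	/*
	 * sbi->owner is recorded when the autofs mount is created, e.g.
	 *     sbi->owner = current->nsproxy->mnt_ns;
	 *
	 * If the process triggering the automount is in a different,
	 * propagation private mount namespace, the daemon's mount can
	 * never appear there, so fail immediately instead of letting
	 * the caller loop on mount requests that cannot be satisfied.
	 */
	if (current->nsproxy->mnt_ns != sbi->owner)
		return -ENOENT;
	return 0;
}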
I was tempted to work out how to not include the autofs mounts in the
cloned namespace, but that would be file system specific code in the VFS,
which is not ok. It should also be possible for the namespace creator to
"mount --make-shared" the mount in the case the creator wants it to
function, and excluding the mounts from the clone would prevent that. So
I don't think this is the right thing to do.
There's also the inability of the mount namespace creator to umount the
autofs mount; allowing that could also resolve the problem, but I haven't
looked into it yet.
Have I made sense?
Clearly there's nothing here about autofs itself and why one would want
to use it, but I don't think that matters for the description.
Ian