Message-ID: <20200423161717.GB12201@mail.hallyn.com>
Date:   Thu, 23 Apr 2020 11:17:17 -0500
From:   "Serge E. Hallyn" <serge@...lyn.com>
To:     Christian Brauner <christian.brauner@...ntu.com>
Cc:     "Serge E. Hallyn" <serge@...lyn.com>, Jens Axboe <axboe@...nel.dk>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
        linux-api@...r.kernel.org, Jonathan Corbet <corbet@....net>,
        "Rafael J. Wysocki" <rafael@...nel.org>, Tejun Heo <tj@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Saravana Kannan <saravanak@...gle.com>,
        Jan Kara <jack@...e.cz>, David Howells <dhowells@...hat.com>,
        Seth Forshee <seth.forshee@...onical.com>,
        David Rheinsberg <david.rheinsberg@...il.com>,
        Tom Gundersen <teg@...m.no>,
        Christian Kellner <ckellner@...hat.com>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Stéphane Graber <stgraber@...ntu.com>,
        linux-doc@...r.kernel.org, netdev@...r.kernel.org,
        Steve Barber <smbarber@...gle.com>,
        Dylan Reid <dgreid@...gle.com>,
        Filipe Brandenburger <filbranden@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Benjamin Elder <bentheelder@...gle.com>,
        Akihiro Suda <suda.kyoto@...il.com>
Subject: Re: [PATCH v2 2/7] loopfs: implement loopfs

On Thu, Apr 23, 2020 at 01:24:01PM +0200, Christian Brauner wrote:
> On Wed, Apr 22, 2020 at 04:52:13PM -0500, Serge Hallyn wrote:
> > On Wed, Apr 22, 2020 at 04:54:32PM +0200, Christian Brauner wrote:
> > > This implements loopfs, a loop device filesystem. It takes inspiration
> > > from the binderfs filesystem I implemented about two years ago, with
> > > which we have had good experiences overall so far. Parts of it are also
> > > based on [3], but it's mostly a new, imho cleaner approach.
> > > 
> > > Loopfs allows applications to create private loop device instances for
> > > various use-cases. It covers the use-case, expressed on-list and
> > > in-person, of getting programmatic access to private loop devices for
> > > image building in sandboxes. An illustration of this is provided in
> > > [4].
> > > 
> > > Loopfs is also intended to provide loop devices to privileged and
> > > unprivileged containers, which has been a frequent request from various
> > > major tools (Chromium, Kubernetes, LXD, Moby/Docker, systemd). I'm
> > > providing a non-exhaustive list of issues and requests (cf. [5]) around
> > > this feature mainly to illustrate that I'm not making the use-cases up.
> > > Currently none of this can be done safely, since handing a loop device
> > > from the host into a container means that the container can see anything
> > > the host is doing with that loop device, as well as what other
> > > containers are doing with it. And (bind-)mounting devtmpfs inside
> > > containers is not secure at all, so that is not an option either
> > > (though it is apparently sometimes done out of despair).
> > > 
> > > The workloads people run in containers are supposed to be indiscernible
> > > from workloads run on the host, and the tools inside the container are
> > > not supposed to need to be aware that they are running inside a
> > > container, apart from the containerization tools themselves. This is
> > > especially true when running older distros in containers that existed
> > > before containers were as ubiquitous as they are today. With loopfs a
> > > user can call mount -o loop and in a correctly set up container things
> > > work the same way they would on the host. The filesystem representation
> > > allows us to do this in a very simple way. At container setup, a
> > > container manager can mount a private instance of loopfs somewhere, e.g.
> > > at /dev/loopfs, and then bind-mount or symlink /dev/loopfs/loop-control
> > > to /dev/loop-control, pre-allocate and symlink the standard number of
> > > devices into their standard location, and have a service file or rules
> > > in place that also symlink additionally allocated loop devices into
> > > place through losetup.
> > > With the new syscall interception logic this is also possible for
> > > unprivileged containers. In these cases, when a user calls mount -o loop
> > > <image> <mountpoint>, it will be possible to completely set up the loop
> > > device in the container. The final mount syscall is handled through
> > > syscall interception, which we already implemented and released in
> > > earlier kernels (see [1] and [2]) and which is actively used in
> > > production workloads. The mount is often rewritten to use a fuse binary
> > > to provide safe access for unprivileged containers.
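
For illustration only (this is not code from the patch): a minimal C
sketch of the container setup described above, assuming a loopfs mount
at /dev/loopfs, that the per-instance loop-control device supports
LOOP_CTL_GET_FREE like the regular one, and that device nodes show up as
loopN inside the instance. Error handling is abbreviated.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/loop.h>

int setup_container_loopfs(void)
{
	/* Mount a private loopfs instance for this container. */
	mkdir("/dev/loopfs", 0755);
	if (mount("loop", "/dev/loopfs", "loopfs", 0, NULL) < 0)
		return -1;

	/* Expose loop-control where losetup and mount -o loop expect it. */
	if (symlink("/dev/loopfs/loop-control", "/dev/loop-control") < 0)
		return -1;

	/* Pre-allocate the standard devices and symlink them into place. */
	int ctl = open("/dev/loopfs/loop-control", O_RDWR | O_CLOEXEC);
	if (ctl < 0)
		return -1;

	for (int i = 0; i < 8; i++) {
		int nr = ioctl(ctl, LOOP_CTL_GET_FREE);
		if (nr < 0)
			break;

		char src[64], dst[64];
		snprintf(src, sizeof(src), "/dev/loopfs/loop%d", nr);
		snprintf(dst, sizeof(dst), "/dev/loop%d", nr);
		symlink(src, dst);
	}
	close(ctl);
	return 0;
}

Devices allocated later (e.g. through losetup) would be symlinked into
place by the service file or rules mentioned above.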
> > > 
> > > Loopfs also allows the creation of hidden/detached dynamic loop devices
> > > and associated mounts, which was also a frequently issued request. With
> > > the old mount api this can be achieved by creating a temporary loopfs,
> > > stashing a file descriptor to the mount point and the loop-control
> > > device, and immediately unmounting the loopfs instance. With the new
> > > mount api a detached mount can be created directly (i.e. a mount not
> > > visible anywhere in the filesystem). New loop devices can then be
> > > allocated and configured. They can be mounted through
> > > /proc/self/<fd>/<nr> with the old mount api or by using the fd directly
> > > with the new mount api. Combined with a mount namespace this allows for
> > > loop devices that are fully auto-cleaned up on program crash. This ties
> > > back to various use-cases and is illustrated in [4].
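
Again for illustration only (not from the patch): a sketch of the
detached-mount flow with the new mount api, using raw syscall numbers
for fsopen()/fsconfig()/fsmount() since libc wrappers may not be
available, and again assuming device nodes named loopN inside the
instance.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/loop.h>
#include <linux/mount.h>

static int sys_fsopen(const char *fsname, unsigned int flags)
{
	return syscall(__NR_fsopen, fsname, flags);
}

static int sys_fsconfig(int fd, unsigned int cmd, const char *key,
			const void *value, int aux)
{
	return syscall(__NR_fsconfig, fd, cmd, key, value, aux);
}

static int sys_fsmount(int fd, unsigned int flags, unsigned int attrs)
{
	return syscall(__NR_fsmount, fd, flags, attrs);
}

/* Returns an fd to a freshly allocated loop device living in a loopfs
 * instance that is not visible anywhere in the filesystem. */
int detached_loop_device(void)
{
	int fsfd = sys_fsopen("loopfs", 0);
	if (fsfd < 0)
		return -1;
	if (sys_fsconfig(fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0) < 0)
		return -1;

	/* Detached mount: only reachable through this fd. */
	int mntfd = sys_fsmount(fsfd, 0, 0);
	if (mntfd < 0)
		return -1;

	int ctl = openat(mntfd, "loop-control", O_RDWR | O_CLOEXEC);
	if (ctl < 0)
		return -1;

	int nr = ioctl(ctl, LOOP_CTL_GET_FREE);
	if (nr < 0)
		return -1;

	/* Open the device node relative to the detached mount; with the
	 * old mount api it could also be reached via a /proc/self/fd/
	 * path as mentioned above. */
	char name[32];
	snprintf(name, sizeof(name), "loop%d", nr);
	return openat(mntfd, name, O_RDWR | O_CLOEXEC);
}

Once these fds are the only references, program exit (or crash) drops
the instance and its devices, which is the auto-cleanup property
described above.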
> > > 
> > > The filesystem representation requires the standard boilerplate
> > > filesystem code we know from other tiny filesystems. And all of
> > > the loopfs code is hidden under a config option that defaults to false.
> > > This specifically means that none of the code even exists when users do
> > > not have any use-case for loopfs.
> > > In addition, the loopfs code does not alter how loop devices behave at
> > > all, i.e. there are no changes to any existing workloads, and I've taken
> > > care to ifdef out all loopfs-specific things.
> > > 
> > > Each loopfs mount is a separate instance. As such, loop devices created
> > > in one instance are independent of loop devices created in another
> > > instance. This specifically entails that loop devices are only visible
> > > in the loopfs instance they belong to.
> > > 
> > > The number of loop devices available in loopfs instances is
> > > hierarchically limited through /proc/sys/user/max_loop_devices via the
> > > ucount infrastructure (thanks to David Rheinsberg for pointing out that
> > > missing piece). An administrator could e.g. run
> > > echo 3 > /proc/sys/user/max_loop_devices, at which point uid x can
> > > create only 3 loop devices in total across all loopfs instances they
> > > mount, no matter how many instances that is. This limit applies
> > > hierarchically to all user namespaces.
> > 
> > Hm, info->device_count is per loopfs mount, though, right?  I don't
> > see where this gets incremented for all of a user's loopfs mounts
> > when one adds a loopdev?
> > 
> > I'm sure I'm missing something obvious...
> 
> Hm, I think you might be mixing up the two limits? device_count
> corresponds to the "max" mount option and is not involved in enforcing
> hierarchical limits. The global restriction is enforced through
> inc_ucount(), which tracks by the uid of the mounter of the superblock.
> If the same user mounts multiple loopfs instances in the same namespace,
> the ucount infra will enforce their quota across all loopfs instances.
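
(For illustration, assuming a per-instance "max=2" mount option,
max_loop_devices=3 for the mounting uid, and that allocation fails at
LOOP_CTL_GET_FREE once a limit is hit, the two limits would interact
roughly like this:)

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/loop.h>

static int alloc_dev(const char *ctl_path)
{
	int ctl = open(ctl_path, O_RDWR | O_CLOEXEC);
	if (ctl < 0)
		return -1;
	int nr = ioctl(ctl, LOOP_CTL_GET_FREE);
	close(ctl);
	return nr;
}

int main(void)
{
	/* Two instances mounted by the same uid, each capped at 2
	 * devices by the per-instance "max" option. */
	mkdir("/mnt/loopfs1", 0755);
	mkdir("/mnt/loopfs2", 0755);
	mount("loop", "/mnt/loopfs1", "loopfs", 0, "max=2");
	mount("loop", "/mnt/loopfs2", "loopfs", 0, "max=2");

	alloc_dev("/mnt/loopfs1/loop-control"); /* ok: 1st device for this uid */
	alloc_dev("/mnt/loopfs1/loop-control"); /* ok: 2nd device for this uid */
	alloc_dev("/mnt/loopfs2/loop-control"); /* ok: 3rd device for this uid */
	alloc_dev("/mnt/loopfs2/loop-control"); /* fails: the per-uid ucount
						   quota of 3 is shared across
						   both instances */
	return 0;
}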

Well, I'm trying to understand what the point of the max mount option
is :)  I can just do N mounts to get N*max loop devices and work around
it? But meanwhile, if I have a daemon mounting isos over loopdevs to
extract some files (because I never heard of bsdtar :), I risk more
spurious failures due to hitting max?

If you think we need it, that's fine - it just has the odor of something
that's more trouble than it's worth.

Anyway, with or without it,

Reviewed-by: Serge Hallyn <serge@...lyn.com>

thanks,
-serge
