Message-ID: <2856820a46a6e47206eb51a7f66ec51a7ef0bd06.camel@redhat.com>
Date:   Mon, 16 Jan 2023 16:27:31 +0100
From:   Alexander Larsson <alexl@...hat.com>
To:     Gao Xiang <hsiangkao@...ux.alibaba.com>,
        linux-fsdevel@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, gscrivan@...hat.com
Subject: Re: [PATCH v2 0/6] Composefs: an opportunistically sharing verified
 image filesystem

On Mon, 2023-01-16 at 21:26 +0800, Gao Xiang wrote:
> 
> 
> On 2023/1/16 20:33, Alexander Larsson wrote:
> > 
> > 
> > Suppose you have:
> > 
> > -rw-r--r-- root root image.ext4
> > -rw-r--r-- root root image.composefs
> > drwxr--r-- root root objects/
> > -rw-r--r-- root root objects/backing.file
> > 
> > Are you saying it is easier for someone to modify backing.file than
> > image.ext4?
> > 
> > I argue it is not, but composefs takes some steps to avoid issues
> > here.
> > At mount time, when the basedir ("objects/" above) argument is
> > parsed, we resolve that path and then create a private vfsmount
> > for it:
> > 
> >   resolve_basedir(path) {
> >         ...
> >         mnt = clone_private_mount(&path);
> >         ...
> >   }
> > 
> >   fsi->bases[i] = resolve_basedir(path);
> > 
> > Then we open backing files with this mount as root:
> > 
> >   real_file = file_open_root_mnt(fsi->bases[i], real_path,
> >                                 file->f_flags, 0);
> > 
> > This will never resolve outside the initially specified basedir,
> > even with symlinks or whatever. It will also not be affected by
> > later mount changes in the original mount namespace, as this is a
> > private mount.
> > 
> > This is the same mechanism that overlayfs uses for its upper dirs.
> 
> Ok.  I have no problem of this part.
> 
> > 
> > I would argue that anyone who has rights to modify the contents
> > of files in "objects" (supposing they were created with sane
> > permissions) would also have rights to modify "image.ext4".
> 
> But you don't have any permission checks for files in such an
> "objects/" directory in the composefs source code, do you?

I don't see how permission checks would make any difference to
anyone's ability to modify the image. Do you mean the kernel should
validate the basedir so that it has sane permissions rather than
trusting the user? That seems weird to me.

Or do you mean that someone would create a composefs image that
references a file they could not otherwise read, and then use it as a
basedir in a composefs mount to read the file? Such a mount can only
happen if you are root, and it can only read files inside that
particular directory. However, maybe we should use the caller's
credentials to ensure that they are allowed to read the backing file,
just in case. That can't hurt.
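To illustrate the invariant in userspace terms (a hedged sketch, not the kernel implementation: the name `open_beneath` is made up, and unlike the private-vfsmount approach this realpath-based check is racy under concurrent renames, so it only demonstrates the "never resolve outside the basedir" property):

```python
import os
import os.path

def open_beneath(basedir, relpath):
    """Open basedir/relpath read-only, refusing any resolution (via
    symlinks or "..") that escapes basedir.  Illustrative userspace
    analogue only; the kernel does this atomically via a private
    vfsmount plus file_open_root()."""
    base = os.path.realpath(basedir)
    target = os.path.realpath(os.path.join(base, relpath))
    # commonpath() equals base only when target lies inside it.
    if os.path.commonpath([base, target]) != base:
        raise PermissionError(f"{relpath!r} resolves outside {basedir!r}")
    return os.open(target, os.O_RDONLY)
```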

> As I said in my original reply, don't assume that random users or
> malicious people will pass things in or behave the way you expect.
> Sometimes they won't, and I think in-kernel fses should handle such
> cases by design.  Obviously, any system written by humans can have
> unexpected bugs, but that is another story.  I think in general it
> needs to have such a design at least.

You need to be root to mount a fs, an operation which is generally
unsafe (because few filesystems are completely resistant to hostile
filesystem data). Therefore I think we can expect a certain amount of
sanity in its use, such as "don't pass in directories that are world
writable".

> > 
> > > That is also why we initially selected fscache to manage all
> > > local cache data for EROFS: such a content-defined directory is
> > > fully under the control of in-kernel fscache, rather than being
> > > a random directory created and handed over by some userspace
> > > program.
> > > 
> > > If you are interested in looking into the current in-kernel
> > > fscache behavior, I think it is much the same as what ostree
> > > does now.
> > > 
> > > It just needs new features like
> > >     - multiple directories;
> > >     - daemonless
> > > to match.
> > > 
> > 
> > Obviously everything can be extended to support everything. But
> > composefs is very small and simple (2128 lines of code), while at
> > the same time being easy to use (just mount it with one syscall)
> > and needing no complex userspace machinery or configuration. But
> > even without the above feature additions, fscache + cachefiles is
> > 7982 lines, plus erofs is 9075 lines, and then on top of that you
> > need userspace integration to even use the thing.
> 
> I've replied to this in the comments on LWN.net.  EROFS can handle
> both device-based and file-based images. It can handle FSDAX,
> compression, data deduplication, rolling-hash finer-grained
> compressed data deduplication, etc.  Of course, for your use cases
> you can just turn them off in Kconfig; I think such code is useless
> for your use cases as well.
>
> And as a team effort over the years, EROFS has always accepted
> useful features from other people.  I've also always been working
> on cleaning up EROFS, but as long as it gains more features, the
> code expands, of course.
>
> Also, take your project -- flatpak -- for example: I don't think
> the total line count of the current version is the same as that of
> the original version.
>
> Also, will you always keep the Composefs source code below 2.5k LoC?
> 
> > 
> > Don't take me wrong, EROFS is great for its usecases, but I don't
> > really think it is the right choice for my usecase.
> > 
> > > > > 
> > > > Secondly, the use of fs-cache doesn't stack, as there can
> > > > only be one cachefs agent. For example, mixing an ostree EROFS
> > > > boot with a container backend using EROFS isn't possible (at
> > > > least without deep integration between the two userspaces).
> > > 
> > > The reasons above are all limitations of the current fscache
> > > implementation:
> > > 
> > >    - First, if such an overlay model really works, EROFS can do
> > > it without the fscache feature as well, to integrate with
> > > userspace ostree.  But even so, I hope this new feature can land
> > > in overlayfs rather than somewhere else, since overlayfs has a
> > > native writable layer, so we don't need another overlayfs mount
> > > at all for writing;
> > 
> > I don't think it is the right approach for overlayfs to integrate
> > something like image support. Merging the two codebases would
> > complicate both while adding costs to users who need only support
> > for one of the features. I think reusing and stacking separate
> > features is a better idea than combining them.
> 
> Why? overlayfs could have metadata support as well, if they'd like
> to support advanced features like partial copy-up without fscache
> support.
> 
> > 
> > > 
> > > > 
> > > > Instead, what we have done with composefs is to make
> > > > filesystem image generation from the ostree repository 100%
> > > > reproducible. Then we can
> > > 
> > > EROFS is 100% reproducible as well.
> > > 
> > 
> > 
> > Really, so if I today, on fedora 36 run:
> > # tar xvf oci-image.tar
> > # mkfs.erofs oci-dir/ oci.erofs
> > 
> > And then in 5 years, if someone on debian 13 runs the same
> > commands, with the same tar file, will both oci.erofs files have
> > the same sha256 checksum?
> 
> Why wouldn't it?  Reproducible builds are a MUST for Android use
> cases as well.

That is not quite the same requirements. A reproducible build in the
traditional sense is limited to a particular build configuration. You
define a set of tools for the build, and use the same ones for each
build, and get a fixed output. You don't expect to be able to change
e.g. the compiler and get the same result. Similarly, it is often the
case that different builds or versions of compression libraries give
different results, so you can't expect to use e.g. a different libz and
get identical images.
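As a small concrete illustration of the compression point (a toy Python demo, unrelated to mkfs.erofs or mkcomposefs themselves): even within a single library version, gzip's output bytes depend on header fields such as mtime, and across zlib versions the deflate stream itself may change, so compressed output is not a stable canonical form.

```python
import gzip

data = b"identical input data" * 1024

# Same library, same input, same compression level -- but a different
# gzip header mtime field already changes the output bytes.
a = gzip.compress(data, compresslevel=9, mtime=0)
b = gzip.compress(data, compresslevel=9, mtime=1)

assert gzip.decompress(a) == gzip.decompress(b) == data  # same content
assert a != b  # different bytes, hence different image checksums
```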

> Yes, it may break between versions by mistake, but I think
> reproducible builds are a basic functionality for all image
> use cases.
> 
> > 
> > How do you handle things like different versions or builds of
> > compression libraries creating different results? Do you guarantee
> > not to add any new backwards-compat changes by default, or change
> > any default options? Do you guarantee that the files are read from
> > "oci-dir" in the same order each time? It doesn't look like it.
> 
> If you'd like to put it that way, why wouldn't mkcomposefs have the
> same issue, that it may be broken by some bug?
> 

libcomposefs defines a normalized form for everything (file order,
xattr order, etc.) and carefully normalizes everything so that we can
guarantee these properties. It is possible that some detail was missed,
because we're humans. But it was a very conscious and deliberate design
choice that is deeply encoded in the code and format. For example, this
is why we don't use compression but try to minimize size in other ways.
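The general idea can be sketched as follows (a minimal illustration under my own assumptions, not the actual composefs on-disk format): sort every enumerable collection into a defined order and serialize with a fixed, length-prefixed encoding, so the digest depends only on content, never on enumeration order.

```python
import hashlib

def image_digest(entries):
    """Canonical digest of {path: (data, {xattr_key: value})}:
    paths and xattr keys sorted bytewise, a count prefix for the
    xattr set, and every field length-prefixed.  A sketch only,
    not the composefs format."""
    out = bytearray()
    for path in sorted(entries):
        data, xattrs = entries[path]
        for field in (path.encode(), data):
            out += len(field).to_bytes(4, "big") + field
        out += len(xattrs).to_bytes(4, "big")
        for key in sorted(xattrs):
            for field in (key.encode(), xattrs[key]):
                out += len(field).to_bytes(4, "big") + field
    return hashlib.sha256(bytes(out)).hexdigest()
```

With this, the insertion order of the input no longer matters, which is the property that makes an image bit-reproducible.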

> > > 
> > > But really, personally I think the issue above is different
> > > from loopback devices and may need to be resolved first. And if
> > > possible, I hope it could be a new overlayfs feature for
> > > everyone.
> > 
> > Yeah. Independent of composefs, I think EROFS would be better if
> > you could just point it to a chunk directory at mount time rather
> > than having to route everything through a system-wide global
> > cachefs singleton. I understand that cachefs does help with the
> > on-demand download aspect, but when you don't need that it is
> > just in the way.
> 
> Just check your reply to Dave's review; it seems that the way the
> composefs dir on-disk format works is also quite similar to EROFS,
> see:
> 
> https://docs.kernel.org/filesystems/erofs.html -- Directories
> 
> a block vs a chunk = dirent + names
> 
> cfs_dir_lookup -> erofs_namei + find_target_block_classic;
> cfs_dir_lookup_in_chunk -> find_target_dirent.

Yeah, the dirent layout looks very similar. I guess great minds think
alike! My approach was simpler initially, but it kinda converged on
this when I started optimizing the kernel lookup code with binary
search.
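The shared scheme boils down to something like this (a hypothetical sketch of the idea only, not the on-disk layout of either filesystem): dirents within each block/chunk are kept sorted, and chunks are in global sorted order, so a lookup is a binary search over chunks by first name followed by a binary search within the chosen chunk.

```python
from bisect import bisect_right

def lookup(chunks, name):
    """Find `name` in a directory stored as a list of chunks, each a
    sorted list of (name, inode) dirents, with the chunks themselves
    in global sorted order.  Returns the inode or None."""
    if not chunks:
        return None
    # Pick the last chunk whose first entry is <= name.
    firsts = [chunk[0][0] for chunk in chunks]
    i = bisect_right(firsts, name) - 1
    if i < 0:
        return None  # name sorts before every entry
    chunk = chunks[i]
    # Binary-search within that chunk.
    names = [n for n, _ in chunk]
    j = bisect_right(names, name) - 1
    if j >= 0 and names[j] == name:
        return chunk[j][1]
    return None
```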

> Yes, great projects can be quite similar to each other
> occasionally, not to mention open-source projects ;)
> 
> Anyway, I'm not opposed to Composefs if folks really like a
> new read-only filesystem for this. That is almost all I'd like
> to say about Composefs formally, have fun!
> 
> Thanks,
> Gao Xiang

Cool, thanks for the feedback.


-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Alexander Larsson                                         Red Hat, Inc
       alexl@...hat.com            alexander.larsson@...il.com
He's a maverick guitar-strumming senator with a passion for fast cars.
She's an orphaned winged angel with her own daytime radio talk show.
They fight crime!
