Message-ID: <20250228125542.GA15240@mit.edu>
Date: Fri, 28 Feb 2025 07:55:42 -0500
From: "Theodore Ts'o" <tytso@....edu>
To: Ethan Carter Edwards <ethan@...ancedwards.com>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-staging@...ts.linux.dev, asahi@...ts.linux.dev
Subject: Re: [RFC] apfs: thoughts on upstreaming an out-of-tree module

On Thu, Feb 27, 2025 at 08:53:56PM -0500, Ethan Carter Edwards wrote:
> Lately, I have been thinking a lot about the lack of APFS support on
> Linux. I was wondering what I could do about that. APFS support is not 
> in-tree, but there is a proprietary module sold by Paragon Software [0].
> Obviously, this could not be used in-tree. However, there is also an 
> open source driver that, from what I can tell, was once planned to be 
> upstreamed [1] with associated filesystem progs [2]. I think I would 
> base most of my work off of the existing FOSS tree.
> 
> The biggest barrier I see currently is the driver's use of bufferheads.
> I realize that there has been a lot of work to move existing filesystem
> implementations to iomap/folios, and adding a filesystem that uses
> bufferheads would be antithetical to the purpose of that effort.
> Additionally, there is a lot of ifdef/C preprocessor magic littered
> throughout the codebase that adapts functionality to various
> versions of Linux.

I don't see the use of bufferheads as a fundamental barrier to the
mainline kernel; certainly not for staging.  First of all, there are a
huge number of file systems which still use buffer heads, including:

   adfs affs befs bfs ecryptfs efs exfat ext2 ext4 fat
   freevxfs gfs2 hfs hfsplus hpfs isofs jfs minix nilfs2
   ntfs3 ocfs2 omfs pstore qnx4 qnx6 romfs sysv udf ufs

There are many reasons to move to folios, including better
performance and, if you can take advantage of iomap, making it
easier to be sure things are done correctly.
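
To make that concrete, here is a rough sketch (not taken from the
apfs driver; apfs_extent_lookup() is a hypothetical helper) of the
kind of iomap_begin operation a filesystem supplies so that the iomap
code can drive the folio-based I/O paths for it:

#include <linux/fs.h>
#include <linux/iomap.h>

/*
 * Sketch only: translate a file range to a disk extent and let
 * iomap do all the folio work.  apfs_extent_lookup() is made up.
 */
static int apfs_iomap_begin(struct inode *inode, loff_t pos,
			    loff_t length, unsigned flags,
			    struct iomap *iomap, struct iomap *srcmap)
{
	u64 pblk, len;
	int ret;

	/* Hypothetical: resolve [pos, pos+length) to a physical extent. */
	ret = apfs_extent_lookup(inode, pos, length, &pblk, &len);
	if (ret)
		return ret;

	iomap->bdev   = inode->i_sb->s_bdev;
	iomap->offset = pos;
	iomap->length = len;
	if (pblk) {
		iomap->type = IOMAP_MAPPED;
		iomap->addr = pblk << inode->i_blkbits;
	} else {
		iomap->type = IOMAP_HOLE;
		iomap->addr = IOMAP_NULL_ADDR;
	}
	return 0;
}

static const struct iomap_ops apfs_iomap_ops = {
	.iomap_begin = apfs_iomap_begin,
};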

For example, with ext4 we plan to work towards moving to folios
and iomap for the data plane operations for buffered writes (we
already use iomap for Direct I/O, FIEMAP support, etc.), and while we might
want to move away from buffer heads for metadata blocks, we would need
to change the jbd2 layer to use some simplified layer that looks an
awful lot like buffer heads before we could do that.  We might try to
fork buffer heads, and strip out everything we don't need, and then
merge that with jbd2's journal_head structure, for example.  But
that's a mid-term to long-term project, because using bufferheads
doesn't actually hurt anyone.  (That being said, if anyone wants to
help out with the project of allowing us to move jbd2 away from buffer
heads, let me know --- patches are welcome.)
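
Purely as illustration of that last idea --- nothing like this
exists today --- a buffer head stripped down to what jbd2 needs,
folded together with journal_head, might look something like this,
with field names borrowed from the current struct buffer_head and
struct journal_head:

#include <linux/types.h>
#include <linux/jbd2.h>

/* Illustrative only; no such structure exists in the kernel. */
struct jbd2_buf {
	/* just enough of buffer_head to do block I/O */
	struct block_device	*b_bdev;
	sector_t		b_blocknr;	/* on-disk block number */
	size_t			b_size;		/* block size */
	char			*b_data;	/* mapped data */
	unsigned long		b_state;	/* dirty, uptodate, ... */

	/* jbd2 bookkeeping taken from journal_head */
	int			b_jcount;	/* journal reference count */
	unsigned		b_jlist;	/* which transaction list */
	transaction_t		*b_transaction;
	transaction_t		*b_next_transaction;
};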

In any case, cleaning up the preprocessor magic and other things that
were needed because the code was designed for out-of-tree use is
something I'd encourage you to focus on first; then propose
submitting it to staging.

Cheers,

					- Ted

P.S.  Something that you might want to consider using is fstests (AKA
xfstests), which is the gold standard for file system testing in
Linux.  I have a test appliance VM for xfstests, which you can find
here[1].  I test x86 and arm64 kernels using Google Cloud, and on
local systems using qemu/kvm.  For qemu/kvm testing, this is being
used on Debian, Fedora, openSUSE, and macOS.

[1] https://github.com/tytso/xfstests-bld
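
To give a flavor of the workflow, these two commands are taken from
the xfstests-bld quick start (for anything beyond them, check the
documentation in the repo):

    kvm-xfstests smoke                     # quick sanity run
    kvm-xfstests -c ext4/4k generic/001    # one test, one configuration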

For kernel development on my MacBook Air M2, I can build arm64 kernels
using Debian running in a Parallels VM, and then, to avoid the double
virtualization overhead, I run qemu on macOS using the hvf
accelerator.  It shouldn't be hard to make this work on your Asahi
Linux development system; see more on that below.
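
Concretely, the second half of that is a qemu-system-aarch64
invocation along these lines (the kernel and disk image file names
are placeholders; -accel hvf and -cpu host are the important parts):

    qemu-system-aarch64 -machine virt -accel hvf -cpu host \
        -smp 4 -m 4096 -nographic \
        -kernel Image -append "console=ttyAMA0 root=/dev/vda" \
        -drive file=root_fs.img,format=raw,if=virtio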

For more details about this test infrastructure, including its use on
Google Cloud, see the presentation here[2].  I am using gce-xfstests to
perform continuous integration testing by watching a git branch; when
it gets updated, the test manager (running in an e2-micro VM)
automatically starts a 4 CPU VM to build the kernel, and then launches
multiple 2 CPU VMs to test multiple file system configurations in
parallel --- for example, I am currently running over two dozen
configurations testing ext4, xfs, btrfs, and f2fs on a linux-next
branch every day.  Running a smoke test costs pennies.  A full-up test
of a dozen ext4 configurations (a dozen VMs, running for 2 hours of
wall clock time) costs under $2 at US retail prices.  For APFS, if you
start with a single configuration, with many of the tests disabled
because APFS doesn't support many of the advanced features of ext4 and
xfs, I'm guessing it will cost you less than 25 cents per test run.

[2] https://thunk.org/gce-xfstests
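
In rough outline, the commands involved look like this (the exact
flags are from memory --- treat them as a sketch and check the
documentation at [2]):

    gce-xfstests launch-ltm                   # start the test manager VM
    gce-xfstests ltm smoke --repo <git-url> --watch <branch>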

Or of course you can use qemu-xfstests or kvm-xfstests using local
compute.  I do love, though, being able to fire off a set of tests
and then suspend my laptop, knowing that I will receive e-mail with
the test results when they are ready.

If you are interested in trying to use this on Asahi Linux, I'd
certainly be happy to help you with it.  I suspect that, modulo some
instructions about which packages are needed, it shouldn't be that
hard to run a test appliance.  Building new versions of the appliance
does require a Debian build chroot, which might be trickier to set up
on Asahi, but that's not necessary while you are getting started.

In any case, I strongly encourage file system developers to use
xfstests earlier rather than later.  See the last slide of [2] for my
opinion of "File system development without testing".  :-)
