Message-ID: <20251023002154.GQ6170@frogsfrogsfrogs>
Date: Wed, 22 Oct 2025 17:21:54 -0700
From: "Darrick J. Wong" <djwong@...nel.org>
To: Theodore Tso <tytso@....edu>
Cc: Dave Dykstra <dwd@...n.ch>, linux-ext4@...r.kernel.org
Subject: Re: [PATCH] fuse2fs: updates for message reporting journal is not
supported
On Tue, Oct 21, 2025 at 06:36:05PM -0700, Theodore Tso wrote:
> On Tue, Oct 21, 2025 at 05:33:35PM -0500, Dave Dykstra wrote:
> > I understood that, but does the filesystem actually write metadata after
> > the journal is recovered, such that if the fuse2fs process dies without
> > a clean unmount there might be file corruption or data loss? That is,
> > in the ro case, where the only write access comes from journal
> > recovery, does the warning message really apply?
>
> As an example, if a file system inconsistency is detected by the
> kernel, it will update various fields in the superblock to indicate
> that the file system is corrupted, as well as when and where the
> corruption was detected:
>
> __le32 s_error_count; /* number of fs errors */
> __le32 s_first_error_time; /* first time an error happened */
> __le32 s_first_error_ino; /* inode involved in first error */
> __le64 s_first_error_block; /* block involved of first error */
> __u8 s_first_error_func[32] __nonstring; /* function where the error happened */
> __le32 s_first_error_line; /* line number where error happened */
> __le32 s_last_error_time; /* most recent time of an error */
> __le32 s_last_error_ino; /* inode involved in last error */
> __le32 s_last_error_line; /* line number where error happened */
> __le64 s_last_error_block; /* block involved of last error */
> __u8 s_last_error_func[32] __nonstring; /* function where the error happened */
>
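> To make that concrete, here is a simplified userspace model of the
> bookkeeping (the real code is save_error_info() in fs/ext4/super.c;
> endian conversion, locking, and the exact on-disk layout are elided,
> and record_error() is an illustrative name, not a kernel function):
>
> #include <stdint.h>
> #include <string.h>
> #include <time.h>
>
> /*
>  * Toy model of the error bookkeeping: bump the count, fill in the
>  * first_error fields only once, and overwrite the last_error fields
>  * on every report.  strncpy() may leave the func buffers without a
>  * NUL, matching the kernel's __nonstring fields.
>  */
> struct sb_errinfo {
>         uint32_t error_count;
>         uint32_t first_error_time, first_error_ino, first_error_line;
>         uint64_t first_error_block;
>         char     first_error_func[32];
>         uint32_t last_error_time, last_error_ino, last_error_line;
>         uint64_t last_error_block;
>         char     last_error_func[32];
> };
>
> static void record_error(struct sb_errinfo *es, uint32_t ino,
>                          uint64_t block, const char *func, uint32_t line)
> {
>         es->error_count++;
>         if (!es->first_error_time) {
>                 es->first_error_time = (uint32_t)time(NULL);
>                 es->first_error_ino = ino;
>                 es->first_error_block = block;
>                 strncpy(es->first_error_func, func, 32);
>                 es->first_error_line = line;
>         }
>         es->last_error_time = (uint32_t)time(NULL);
>         es->last_error_ino = ino;
>         es->last_error_block = block;
>         strncpy(es->last_error_func, func, 32);
>         es->last_error_line = line;
> }
>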
> Since this is a single 4k update to the superblock, we don't really
> need to worry about problems caused by a non-atomic update of this
> metadata.  Similarly, if we get interrupted while doing the journal
> replay, the replay is idempotent, so we can just restart it from
> scratch.
>
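> To see why the replay is restartable, here is a toy model (not the
> real jbd2/e2fsck recovery code): replay only ever copies committed
> contents from the journal back to their home locations, so rerunning
> it after an interruption writes the same bytes again.
>
> #include <string.h>
>
> /* One committed block: its copy in the journal and its home location. */
> struct replay_rec {
>         const unsigned char *jblock;   /* committed copy in the journal */
>         unsigned char *fsblock;        /* home location in the fs */
> };
>
> /*
>  * Copy every committed block back into place.  Interrupting this
>  * loop and rerunning it from scratch produces the same final image,
>  * which is what makes crash-interrupted recovery safe to restart.
>  */
> static void replay_journal(struct replay_rec *recs, int nrecs, int blksz)
> {
>         for (int i = 0; i < nrecs; i++)
>                 memcpy(recs[i].fsblock, recs[i].jblock, blksz);
> }
>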
> As for the warning message: if you mean the warning printed by
> fuse2fs indicating that it doesn't have journal support (so that if
> you are modifying the file system and the system or fuse2fs crashes,
> there may be file system corruption and/or data loss), that only
> needs to be printed when mounting read-write.  It should be safe to
> skip printing that warning if the file system is mounted with -o ro,
> based on the reasoning above.
/me notes that (as I pointed out elsewhere in the thread) the fuse
server isn't notified if a mount goes from ro -> rw, so fuse2fs really
ought to print the warning unconditionally.
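
A minimal sketch of the unconditional version (the function and flag
names here are hypothetical stand-ins, not the actual fuse2fs code):

#include <stdio.h>

/*
 * Hypothetical sketch: emit the warning at startup regardless of the
 * ro flag, because a later ro -> rw remount never reaches the fuse
 * server.  Neither this function nor its arguments exist in fuse2fs.
 */
static void warn_no_journal_support(int fs_has_journal, int read_only)
{
        (void)read_only;        /* deliberately not consulted */
        if (fs_has_journal)
                fprintf(stderr,
                        "fuse2fs: journaled operation is not supported; "
                        "a crash may cause corruption or data loss.\n");
}
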
--D
> > > Are you running fstests QA on these patches before you send them out?
> >
> > I had not heard of them, and do not see them documented anywhere in
> > e2fsprogs, so I don't know how I was supposed to have known they were
> > needed. Ideally there would be an automated CI test suite. The patches
> > have passed the github CI checks (which don't show up in the standard
> > pull request place, but I found them in my own repo's Actions tab).
> >
> > Are you talking about the tests at https://github.com/btrfs/fstests?
> > If so, it looks like there are a ton of options. Is there a standard
> > way to run them with fuse2fs?
>
> That is btrfs's local fork of xfstests (or fstests, as it is now
> sometimes called).  We do have an automated way of running them for
> ext4 kernel code.  See [1][2]:
>
> [1] https://thunk.org/gce-xfstests
> [2] https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md
>
> Darrick has been doing a lot of really good work to run fuse2fs
> under fstests/xfstests.  There isn't a turnkey way of running
> fuse2fs with this test suite yet.  It's on my todo list to add an
> easy way to do this via kvm-xfstests/gce-xfstests, but I'm probably
> not going to get to it until sometime next year.  If someone would
> like to give it a try --- patches would be gratefully accepted.
>
> - Ted