Message-ID: <299926848.3375545.1764534866882@kpc.webmail.kpnmail.nl>
Date: Sun, 30 Nov 2025 21:34:26 +0100 (CET)
From: Jori Koolstra <jkoolstra@...all.nl>
To: Viacheslav Dubeyko <Slava.Dubeyko@....com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: "glaubitz@...sik.fu-berlin.de" <glaubitz@...sik.fu-berlin.de>,
"frank.li@...o.com" <frank.li@...o.com>,
"slava@...eyko.com" <slava@...eyko.com>,
"skhan@...uxfoundation.org" <skhan@...uxfoundation.org>,
"syzbot+17cc9bb6d8d69b4139f0@...kaller.appspotmail.com"
<syzbot+17cc9bb6d8d69b4139f0@...kaller.appspotmail.com>
Subject: Re: [PATCH v2] hfs: replace BUG_ONs with error handling

Hi Viacheslav,

Thanks for taking the time to write such a detailed answer. Your comments are
very useful to someone like me who is just starting out in the Linux kernel. I
really appreciate it.
> > @@ -264,9 +264,9 @@ static int hfs_remove(struct inode *dir, struct dentry *dentry)
> > return res;
> > clear_nlink(inode);
> > inode_set_ctime_current(inode);
> > - hfs_delete_inode(inode);
> > + res = hfs_delete_inode(inode);
> > mark_inode_dirty(inode);
> > - return 0;
> > + return res;
>
> This modification doesn't look good, frankly speaking. The hfs_delete_inode()
> will return an error code right at the beginning of its execution, so it makes
> no sense to call mark_inode_dirty() afterwards. However, we have already done a
> lot of work before the hfs_delete_inode() call:
>
> static int hfs_remove(struct inode *dir, struct dentry *dentry)
> {
> struct inode *inode = d_inode(dentry);
> int res;
>
> if (S_ISDIR(inode->i_mode) && inode->i_size != 2)
> return -ENOTEMPTY;
> res = hfs_cat_delete(inode->i_ino, dir, &dentry->d_name);
> if (res)
> return res;
> clear_nlink(inode);
> inode_set_ctime_current(inode);
> hfs_delete_inode(inode);
> mark_inode_dirty(inode);
> return 0;
> }
>
> So, hfs_delete_inode() bailing out early makes the situation really bad:
> we have deleted the record from the Catalog File but skipped the
> hfs_delete_inode() functionality.
>
> I am thinking that, maybe, a better course of action is to check
> HFS_SB(sb)->folder_count and HFS_SB(sb)->file_count at the beginning of
> hfs_remove():
>
> static int hfs_remove(struct inode *dir, struct dentry *dentry)
> {
> struct inode *inode = d_inode(dentry);
> int res;
>
> if (S_ISDIR(inode->i_mode) && inode->i_size != 2)
> return -ENOTEMPTY;
>
> <<-- Check it here and return error
>
> res = hfs_cat_delete(inode->i_ino, dir, &dentry->d_name);
> if (res)
> return res;
> clear_nlink(inode);
> inode_set_ctime_current(inode);
> hfs_delete_inode(inode);
> mark_inode_dirty(inode);
> return 0;
> }
>
That sounds good. But maybe we should do the check even before the -ENOTEMPTY
check: detecting corruption is arguably more important than reporting that the
operation cannot complete for some other (less acute) reason.
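Untested sketch of what I mean, assuming the folder_count/file_count fields in
hfs_sb_info are atomic64_t like next_id elsewhere in this series; -EIO is just
a placeholder error code:

static int hfs_remove(struct inode *dir, struct dentry *dentry)
{
	struct inode *inode = d_inode(dentry);
	struct super_block *sb = dir->i_sb;
	int res;

	/*
	 * Corruption check first: removing an object while the matching
	 * on-disk counter is already zero means the volume metadata is
	 * inconsistent, so refuse before touching anything.
	 */
	if (S_ISDIR(inode->i_mode)) {
		if (atomic64_read(&HFS_SB(sb)->folder_count) <= 0)
			return -EIO;
	} else {
		if (atomic64_read(&HFS_SB(sb)->file_count) <= 0)
			return -EIO;
	}

	if (S_ISDIR(inode->i_mode) && inode->i_size != 2)
		return -ENOTEMPTY;
	res = hfs_cat_delete(inode->i_ino, dir, &dentry->d_name);
	if (res)
		return res;
	clear_nlink(inode);
	inode_set_ctime_current(inode);
	hfs_delete_inode(inode);
	mark_inode_dirty(inode);
	return 0;
}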
> In such case, we reject the removal, return an error, and no activity happens.
> Let's move the check from hfs_delete_inode() to hfs_remove(). We can ignore
> hfs_create() [1] and hfs_mkdir() [2] because these methods simply process the
> erroneous situation.
>
One thing we could also do is what ext4 does: introduce an errors= mount option
that can be set to remount-ro, panic, or continue, depending on the desired
behavior in case of a serious error (like corruption). I already implemented
this for minix fs, and the patch was fine; however, the people responsible for
minix felt it was more sensible to deprecate minix and write a FUSE driver for
it. [1]
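Roughly, I imagine a helper along these lines. Hypothetical sketch, loosely
modelled on ext4's error handling; the HFS_ERRORS_* names and the errors field
in hfs_sb_info are my inventions:

/* React to detected corruption per errors={continue,remount-ro,panic}. */
enum hfs_errors_mode {
	HFS_ERRORS_CONTINUE,	/* log and keep going */
	HFS_ERRORS_RO,		/* remount read-only */
	HFS_ERRORS_PANIC,	/* panic the machine */
};

static void hfs_handle_error(struct super_block *sb)
{
	switch (HFS_SB(sb)->errors) {	/* hypothetical new sb_info field */
	case HFS_ERRORS_PANIC:
		panic("hfs: panic forced after error\n");
	case HFS_ERRORS_RO:
		pr_crit("hfs: remounting filesystem read-only\n");
		sb->s_flags |= SB_RDONLY;
		break;
	case HFS_ERRORS_CONTINUE:
	default:
		break;
	}
}

Corruption checks like the ones discussed above would then call
hfs_handle_error() instead of deciding the policy themselves.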
> > +#define EFSCORRUPTED EUCLEAN /* Filesystem is corrupted */
>
> I don't think that renaming an existing error code is a good idea, especially
> because we will not need the newly introduced error code's name. Please see my
> comments below.
>
For context, I took this from ext4, which defines EFSCORRUPTED as EUCLEAN the
same way in fs/ext4/ext4.h.
> > --- a/fs/hfs/inode.c
> > +++ b/fs/hfs/inode.c
> > @@ -186,16 +186,22 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> > s64 next_id;
> > s64 file_count;
> > s64 folder_count;
> > + int err = -ENOMEM;
> >
> > if (!inode)
> > - return NULL;
> > + goto out_err;
> > +
> > + err = -EFSCORRUPTED;
>
> In 99% of cases, this logic will be called by file system internal logic after
> a successful mount, so the file system volume is not corrupted. Even if we
> suspect that the volume is corrupted, the potential reason could be a failed
> read (-EIO). One needs to run an FSCK tool to be sure that the volume is
> really corrupted.
>
I get your point; maybe just warn about possible corruption?
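Something like this, perhaps (sketch; it keeps the err/out_err unwinding from
my patch but warns and returns -EIO as you suggest; the exact condition shown
is schematic):

	if (file_count > U32_MAX || folder_count > U32_MAX) {
		/* Warn instead of claiming corruption outright. */
		pr_warn("hfs: implausible file/folder counts; consider running fsck\n");
		err = -EIO;
		goto out_err;
	}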
> >
> > mutex_init(&HFS_I(inode)->extents_lock);
> > INIT_LIST_HEAD(&HFS_I(inode)->open_dir_list);
> > spin_lock_init(&HFS_I(inode)->open_dir_lock);
> > hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
> > next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
> > - BUG_ON(next_id > U32_MAX);
> > + if (next_id > U32_MAX) {
> > + pr_err("next CNID exceeds limit — filesystem corrupted. It is recommended to run fsck\n");
>
> The file system volume is not corrupted here, because it is only an error in
> the file system logic, and we will not store this incorrect number on the
> volume anyway. At minimum, we should protect the logic from doing this. And
> there is no need to recommend running an FSCK tool here.
What if, e.g., next_id is not above U32_MAX but some slightly smaller value
that is still impossible? That would be corruption too, correct? And right now
we would not find out at mount time. Maybe we should just check, at mount time
and whenever the MDB is written, that values like file/folder_count and next_id
make any sense at all. I think they indicate corruption even for values much
smaller than U32_MAX, but I could not really pin down the exact bounds.
If we had such checks, then the remaining BUG_ONs would indicate implementation
logic issues rather than corruption. Correct?
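For the mount-time part, a rough sketch (I am only confident about the lower
bound on the next CNID; HFS_FIRSTUSER_CNID comes from fs/hfs/hfs.h, and the
error path is schematic):

	/* In hfs_mdb_get(), after reading the MDB: a next CNID below the
	 * first user CNID can never be valid on disk, so refuse the mount
	 * rather than trusting it and tripping a BUG_ON() later. */
	if (be32_to_cpu(mdb->drNxtCNID) < HFS_FIRSTUSER_CNID) {
		pr_err("hfs: implausible next CNID (%u) in MDB\n",
		       be32_to_cpu(mdb->drNxtCNID));
		return -EIO;	/* real code would release buffers first */
	}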
>
> Probably, it makes sense to decrement the erroneous value back.
>
> Potentially, if we have such a situation, it may make sense to consider making
> the file system READ-ONLY. But I am not fully sure.
>
See my comment about the errors= mount option above.
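Concretely, the combination could look like this (sketch only; it reuses the
hypothetical hfs_handle_error() helper from above and the err/out_err
unwinding from my patch):

	next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
	if (next_id > U32_MAX) {
		/* Undo the bogus increment so the counter stays sane. */
		atomic64_dec(&HFS_SB(sb)->next_id);
		hfs_handle_error(sb);	/* hypothetical helper from above */
		err = -EIO;
		goto out_err;
	}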
Thanks,
Jori.
[1] https://lkml.org/lkml/2025/10/28/1786