Message-ID: <20230804-allheilmittel-teleobjektiv-a0351a653d31@brauner>
Date: Fri, 4 Aug 2023 16:36:49 +0200
From: Christian Brauner <brauner@...nel.org>
To: Christoph Hellwig <hch@....de>
Cc: syzbot <syzbot+2faac0423fdc9692822b@...kaller.appspotmail.com>,
jack@...e.cz, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com,
viro@...iv.linux.org.uk
Subject: Re: [syzbot] [fs?] KASAN: slab-use-after-free Read in
test_bdev_super_fc
On Fri, Aug 04, 2023 at 04:02:01PM +0200, Christoph Hellwig wrote:
> On Fri, Aug 04, 2023 at 03:20:45PM +0200, Christian Brauner wrote:
> > On Fri, Aug 04, 2023 at 12:14:08PM +0200, Christoph Hellwig wrote:
> > > FYI, I can reproduce this trivially locally, but even after spending a
> > > significant time with the trace I'm still puzzled at what is going
> > > on. I've started trying to make sense of the lockdep report about
> > > returning to userspace with s_umount held, originally locked in
> > > get_tree_bdev and am still missing how it could happen.
> >
> > So in the old scheme:
> >
> > s = alloc_super()
> > -> down_write_nested(&s->s_umount, SINGLE_DEPTH_NESTING);
> >
> > and assuming you're not finding an old one, immediately afterwards you'd
> >
> > -> spin_lock(&sb_lock)
> >
> > static int set_bdev_super(struct super_block *s, void *data)
> > {
> > s->s_bdev = data;
> > s->s_dev = s->s_bdev->bd_dev;
> > s->s_bdi = bdi_get(s->s_bdev->bd_disk->bdi);
> >
> > if (bdev_stable_writes(s->s_bdev))
> > s->s_iflags |= SB_I_STABLE_WRITES;
> > return 0;
> > }
> >
> > -> spin_unlock(&sb_lock)
> >
> > in the new scheme you're doing:
> >
> > s = alloc_super()
> > -> down_write_nested(&s->s_umount, SINGLE_DEPTH_NESTING);
> >
> > and assuming you're not finding an old one, immediately afterwards you'd
> >
> > up_write(&s->s_umount);
> >
> > error = setup_bdev_super(s, fc->sb_flags, fc);
> > -> spin_lock(&sb_lock);
> > sb->s_bdev = bdev;
> > sb->s_bdi = bdi_get(bdev->bd_disk->bdi);
> > if (bdev_stable_writes(bdev))
> > sb->s_iflags |= SB_I_STABLE_WRITES;
> > -> spin_unlock(&sb_lock);
> >
> > down_write(&s->s_umount);
> >
> > Which looks like the lock ordering here is changed?
>
> Yes, that not only should be safe, but more importantly should not
> lead to a return to userspace with s_umount held.
>
> Anyway, debugging a regression in mainline right now so I'm taking a
> break from this one.
FFS
diff --git a/fs/romfs/super.c b/fs/romfs/super.c
index c59b230d55b4..96023fac1ed8 100644
--- a/fs/romfs/super.c
+++ b/fs/romfs/super.c
@@ -590,10 +590,7 @@ static void romfs_kill_sb(struct super_block *sb)
}
#endif
#ifdef CONFIG_ROMFS_ON_BLOCK
- if (sb->s_bdev) {
- kill_block_super(sb);
- return;
- }
+ kill_block_super(sb);
#endif
}