Date:   Wed, 21 Jul 2021 15:34:13 +0200
From:   David Sterba <dsterba@...e.cz>
To:     Desmond Cheong Zhi Xi <desmondcheongzx@...il.com>
Cc:     Nikolay Borisov <nborisov@...e.com>, clm@...com,
        josef@...icpanda.com, dsterba@...e.com, anand.jain@...cle.com,
        linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
        skhan@...uxfoundation.org, gregkh@...uxfoundation.org,
        linux-kernel-mentees@...ts.linuxfoundation.org,
        syzbot+a70e2ad0879f160b9217@...kaller.appspotmail.com
Subject: Re: [PATCH] btrfs: fix rw device counting in
 __btrfs_free_extra_devids

On Thu, Jul 15, 2021 at 09:11:43PM +0800, Desmond Cheong Zhi Xi wrote:
> On 15/7/21 7:55 pm, Nikolay Borisov wrote:
> > 
> > 
> > On 15.07.21 г. 13:34, Desmond Cheong Zhi Xi wrote:
> >> Syzbot reports a warning in close_fs_devices that happens because
> >> fs_devices->rw_devices is not 0 after calling btrfs_close_one_device
> >> on each device.
> >>
> >> This happens when a writeable device is removed in
> >> __btrfs_free_extra_devids, but the rw device count is not decremented
> >> accordingly. So when close_fs_devices is called, the removed device is
> >> still counted and we get an off-by-one error.
> >>
> >> Here is one call trace that was observed:
> >>    btrfs_mount_root():
> >>      btrfs_scan_one_device():
> >>        device_list_add();   <---------------- device added
> >>      btrfs_open_devices():
> >>        open_fs_devices():
> >>          btrfs_open_one_device();   <-------- rw device count ++
> >>      btrfs_fill_super():
> >>        open_ctree():
> >>          btrfs_free_extra_devids():
> >>            __btrfs_free_extra_devids();  <--- device removed
> >>          fail_tree_roots:
> >>            btrfs_close_devices():
> >>              close_fs_devices();   <------- rw device count off by 1
> >>
> >> Fixes: cf89af146b7e ("btrfs: dev-replace: fail mount if we don't have replace item with target device")
> >> Reported-by: syzbot+a70e2ad0879f160b9217@...kaller.appspotmail.com
> >> Tested-by: syzbot+a70e2ad0879f160b9217@...kaller.appspotmail.com
> >> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx@...il.com>
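
To make the counting problem concrete, the direction of the fix is to
drop the rw counter together with the writeable state when
__btrfs_free_extra_devids() discards such a device. A sketch of the
idea (my reading of the description above, not necessarily the exact
hunk in the patch):

	if (device->bdev &&
	    test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
		list_del_init(&device->dev_alloc_list);
		clear_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state);
		/* keep the counter consistent with the flag and list above */
		fs_devices->rw_devices--;
	}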
> > 
> > Is there a reliable reproducer from syzbot? Can this be turned into an
> > xfstest?
> > 
> 
> Syzbot has some reliable reproducers here:
> https://syzkaller.appspot.com/bug?id=113d9a01cbe0af3e291633ba7a7a3e983b86c3c0
> 
> Seems like it constructs two images in memory and then mounts them. I'm
> not sure whether that's amenable to being converted into an xfstest?

It would need to be an image from the time the warning is reproduced;
I'm not sure how much the timing matters as well. But IIRC adding raw
test images to fstests was not welcome, so it would have to be a
reproducer, and given that the syzkaller source is not human readable,
I'm not sure that would be welcome either.
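
For reference, the readable core of such a reproducer usually boils
down to: write the crafted image into a file, attach it to a free loop
device, and mount that device. A rough, hypothetical distillation (no
error handling, names made up for illustration):

#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Attach an image file to the first free loop device and return its
 * fd; devpath receives the /dev/loopN name the caller can mount. */
static int attach_image(const char *image, char *devpath, size_t len)
{
	int ctl = open("/dev/loop-control", O_RDWR);
	int nr = ioctl(ctl, LOOP_CTL_GET_FREE);	/* first free /dev/loopN */
	close(ctl);

	snprintf(devpath, len, "/dev/loop%d", nr);

	int loop = open(devpath, O_RDWR);
	int img = open(image, O_RDWR);
	ioctl(loop, LOOP_SET_FD, img);	/* back the loop dev with the image */
	close(img);
	return loop;			/* caller can now mount devpath */
}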

Maybe there's some middle ground where the image is created by mkfs and
filled with the data, and the mount loop is then started from a shell.
But that means untangling the C reproducer.
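
The mount loop itself could then be tiny. A hypothetical sketch in C
rather than shell, assuming the image is already attached as /dev/loop0
and /mnt/test exists (both made-up names):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Repeatedly mount and unmount; the WARNING, if any, lands in dmesg. */
	for (int i = 0; i < 1000; i++) {
		if (mount("/dev/loop0", "/mnt/test", "btrfs", 0, NULL) == 0)
			umount("/mnt/test");
		else
			perror("mount");
	}
	return 0;
}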
