Message-ID: <32789c5c-9223-4345-88e0-d806bd1b0b31@gmail.com>
Date: Mon, 15 Dec 2025 09:55:21 +0200
From: "Nikola Z. Ivanov" <zlatistiv@...il.com>
To: aivazian.tigran@...il.com
Cc: linux-kernel@...r.kernel.org,
syzbot+f51a2a34984e4d8888fd@...kaller.appspotmail.com
Subject: Re: [PATCH] bfs: Fix calculation of offset when moving blocks
On 11/24/25 7:28 PM, Nikola Z. Ivanov wrote:
> bfs_move_blocks performs an incorrect calculation when
> looping over the blocks that are to be moved.
>
> When looping over the section of blocks:
>
> for (i = start; i <= end; i++)
> if(bfs_move_block(i, where + i, sb)) {
>
> This does not move blocks "start"-"end" to
> "where"-"where + (end - start)"; because "i" is
> initialized to "start", each destination "where + i"
> carries an extra, unintended offset of "start" blocks.
>
> Example reproducer of the bug:
>
> dd if=/dev/zero of=./bfs.img count=200 bs=1M
> mkfs.bfs ./bfs.img
> losetup /dev/loop0 ./bfs.img
> mount /dev/loop0 /mnt
> dd if=/dev/urandom of=/mnt/file0 count=75 bs=1M
> dd if=/dev/urandom of=/mnt/file1 count=25 bs=1M
> dd if=/dev/urandom of=/mnt/file2 count=25 bs=1M
> echo 123 >> /mnt/file1
>
> Essentially this creates 3 files, with file1 located
> right between file0 and file2 on disk. Writing more
> data to file1 causes bfs to attempt to move it after
> file2, where there is enough free space to store it,
> but due to the incorrect calculation it tries to move
> it past the end of the device and triggers a NULL
> pointer dereference in bfs_move_block.
>
> Even if there is enough space to absorb the unintended
> offset, the content of file1 becomes garbage, because
> bfs_get_block expects file1's blocks to start right
> after file2, while the data was actually written
> "start" blocks further out.
>
> For example:
>
> dd if=/dev/zero of=./bfs.img count=200 bs=1M
> mkfs.bfs ./bfs.img
> losetup /dev/loop0 ./bfs.img
> mount /dev/loop0 /mnt
> dd if=/dev/urandom of=/mnt/file0 count=1 bs=512
> dd if=/dev/urandom of=/mnt/file1 count=1 bs=512
> dd if=/dev/urandom of=/mnt/file2 count=1 bs=512
> sync; echo 1 > /proc/sys/vm/drop_caches
> xxd /mnt/file1
> echo 123 >> /mnt/file1
> sync; echo 1 > /proc/sys/vm/drop_caches
> xxd /mnt/file1
>
> The hexdump reveals that our file now contains zeroes,
> except for the "123\n" we echoed at the end.
>
> Fix this by correcting the calculation. Additionally,
> add a check for new == NULL inside bfs_move_block, and
> propagate the return value of bfs_move_block from
> bfs_move_blocks in case of error.
>
> Reported-by: syzbot+f51a2a34984e4d8888fd@...kaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f51a2a34984e4d8888fd
> Signed-off-by: Nikola Z. Ivanov <zlatistiv@...il.com>
> ---
> The example reproducers might trigger a WARNING() in
> mark_buffer_dirty, which is often observed when blocks are
> moved around; syzbot has already hit that bug here:
> https://syzkaller.appspot.com/bug?extid=2327bccb02eef9291c1c
> I will follow up with a patch for that soon.
>
> fs/bfs/file.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/fs/bfs/file.c b/fs/bfs/file.c
> index d33d6bde992b..649105018014 100644
> --- a/fs/bfs/file.c
> +++ b/fs/bfs/file.c
> @@ -40,6 +40,10 @@ static int bfs_move_block(unsigned long from, unsigned long to,
>  	if (!bh)
>  		return -EIO;
>  	new = sb_getblk(sb, to);
> +	if (!new) {
> +		brelse(bh);
> +		return -EIO;
> +	}
>  	memcpy(new->b_data, bh->b_data, bh->b_size);
>  	mark_buffer_dirty(new);
>  	bforget(bh);
> @@ -51,14 +55,17 @@ static int bfs_move_blocks(struct super_block *sb, unsigned long start,
>  			   unsigned long end, unsigned long where)
>  {
>  	unsigned long i;
> +	int err;
>  
>  	dprintf("%08lx-%08lx->%08lx\n", start, end, where);
> -	for (i = start; i <= end; i++)
> -		if(bfs_move_block(i, where + i, sb)) {
> -			dprintf("failed to move block %08lx -> %08lx\n", i,
> -					where + i);
> -			return -EIO;
> +	for (i = 0; i <= end - start; i++) {
> +		err = bfs_move_block(start + i, where + i, sb);
> +		if (err) {
> +			dprintf("failed to move block %08lx -> %08lx\n",
> +				start + i, where + i);
> +			return err;
>  		}
> +	}
>  	return 0;
>  }
>
Hi Tigran,

I hope I'm not being too pushy by sending this reminder,
but have you had a chance to look at this patch?

Thank you!