Message-ID: <8760ihjzjs.fsf@dmlp.sw.ru>
Date: Thu, 06 Apr 2017 19:01:11 +0300
From: Dmitry Monakhov <dmonakhov@...nvz.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: linux-kernel@...r.kernel.org, darrick.wong@...cle.com,
axboe@...nel.dk, tytso@....edu, jack@...e.cz, hch@...radead.org
Subject: Re: [PATCH 1/5] bh: Prevent panic on invalid BHs
Christoph Hellwig <hch@...radead.org> writes:
> This looks ok, but how did you manage to trigger this case?
# testcases
# TEST1: via a bug in fallocate on the block device
truncate -s 1G img
losetup /dev/loop0 img
mkfs.ext4 -qF /dev/loop0
mkdir m
mount /dev/loop0 m
# the falloc below truncates the bdev's page cache (see the excerpt after this testcase)
xfs_io -c "falloc -k 0 32G" -d /dev/loop0
for ((i=0;i<100;i++));do
	xfs_io -f -d -c "pwrite 0 4k" m/test-$i;
done
sync
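
For TEST1 the invalidation comes from blkdev_fallocate(). Roughly (quoted
from memory for kernels around v4.11, so the exact form may differ), it does:

	/* fs/block_dev.c: blkdev_fallocate(), abridged */
	mapping = bdev->bd_inode->i_mapping;
	/* Invalidate the page cache, including dirty pages. */
	truncate_inode_pages_range(mapping, start, end);

so a falloc on the raw loop device throws away bdev pages that still back
live buffer_heads of the mounted ext4.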
# TEST2: NBD close_sock -> kill_bdev
mkdir -p a/mnt
cd a
truncate -s 1G img
mkfs.ext4 -qF img
qemu-nbd -c /dev/nbd0 img
mount /dev/nbd0 mnt
cp -r /bin/ mnt &
# Disconnect nbd while cp is active
qemu-nbd -d /dev/nbd0
sync
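
For TEST2 the same thing happens via kill_bdev() when the nbd socket is torn
down. From memory (around v4.11, details may differ) it is roughly:

	/* fs/block_dev.c */
	void kill_bdev(struct block_device *bdev)
	{
		struct address_space *mapping = bdev->bd_inode->i_mapping;

		if (mapping->nrpages == 0 && mapping->nrexceptional == 0)
			return;

		invalidate_bh_lrus();
		/* drops bdev pages that may still back live buffer_heads */
		truncate_inode_pages(mapping, 0);
	}

so metadata writeback after the disconnect can trip over BHs that are no
longer valid.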
> I think
> we might have a deeper problem here.
Probably. It seems that the !buffer_locked(bh) case should stay a BUG_ON,
because it is hard to make even a semi-correct decision at that point.
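
For reference, the checks in question sit at the top of submit_bh_wbc() in
fs/buffer.c; quoted from memory for ~v4.11, so the exact form may differ:

	BUG_ON(!buffer_locked(bh));	/* pure caller bug, keep as BUG_ON */
	BUG_ON(!buffer_mapped(bh));	/* reachable via the testcases above */
	BUG_ON(!bh->b_end_io);
	BUG_ON(buffer_delay(bh));
	BUG_ON(buffer_unwritten(bh));

The idea is that the buffer-state checks can fail gracefully, while the
locking check stays a hard assertion.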