Message-ID: <CANP3RGc3J+-SPgSbQKM12PhupyheKKeM2dcS_JqmVvby+W_XXQ@mail.gmail.com>
Date: Sun, 28 Jan 2024 22:45:50 -0800
From: Maciej Żenczykowski <zenczykowski@...il.com>
To: "Ted Ts'o" <tytso@...gle.com>, Kernel hackers <linux-kernel@...r.kernel.org>
Subject: BLKDISCARD... off by -4kiB @ >=4GiB?
On a personal machine with 6.6.13-200.fc39.x86_64 and a 2280 PCIe NVMe
SSD (Samsung MZ-VKW1T00),
blkdiscard -v /dev/nvme0n1
seems to issue (per strace):
ioctl(3, BLKDISCARD, [0, 1024209543168])
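(i.e. a single discard ioctl covering the whole device in one go:
1024209543168 bytes / 2^30 is roughly 954GiB)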
but looking at the resulting block device (xxd -a < /dev/nvme0n1)...
it looks like exactly the last 4kiB of every 4GiB is not zeroed.
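(In other words, if I'm reading the xxd output right, the non-zeroed
ranges are the 4kiB just below each 4GiB boundary: byte offsets
0xFFFFF000-0xFFFFFFFF, 0x1FFFFF000-0x1FFFFFFFF, and so on.)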
Note: this is in spite of running it more than once.
(lsblk -D claims DISC-ALN 0, DISC-GRAN 512B, DISC-MAX 2T, DISC-ZERO 0)
So it seems like there's a bug either in the SSD firmware or in the kernel...?
The issue (at the end of the first 4GiB) goes away if I:
blkdiscard -v -o 0 -l $[2<<30] /dev/nvme0n1
blkdiscard -v -o $[2<<30] -l $[2<<30] /dev/nvme0n1
or (for full drive):
blkdiscard -v -p $[2<<30] /dev/nvme0n1
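(2<<30 is 2GiB, so in all three cases every discard ioctl covers at most 2GiB.)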
Looking (but not too deeply) at the kernel code, I don't see any
obvious culprit, but maybe something should just limit discards to 2GiB?
Although __blkdev_issue_discard's:
bio->bi_iter.bi_size = req_sects << 9;
seems odd considering
struct bvec_iter {
sector_t bi_sector; /* device address in 512 byte sectors */
unsigned int bi_size; /* residual I/O count */
which should imply 4GiB would need to be stored as a u32 bi_size of 0...
But I do networking, so maybe I'm missing some block/fs convention...
Or maybe this is already splitting into smaller pieces, and then
coalescing, or who knows what...
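For what it's worth, here's a minimal userspace sketch of the truncation
concern above; it only mirrors the types of the quoted fields (sector_t
as u64, bi_size as u32) and is an illustration, not the actual kernel path:

/*
 * Userspace illustration (not kernel code): if req_sects ever described
 * a full 4GiB, the u32 assignment below would wrap to 0.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t req_sects = (4ULL << 30) >> 9;            /* 4GiB in 512-byte sectors */
	uint32_t bi_size = (uint32_t)(req_sects << 9);     /* mirrors bi_size = req_sects << 9 */

	printf("req_sects=%llu bi_size=%u\n",
	       (unsigned long long)req_sects, bi_size);    /* bi_size prints as 0 */
	return 0;
}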
--
Maciej Żenczykowski, Kernel Networking Developer @ Google