Message-Id: <20210105122717.2568-1-minwoo.im.dev@gmail.com>
Date: Tue, 5 Jan 2021 21:27:16 +0900
From: Minwoo Im <minwoo.im.dev@...il.com>
To: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-nvme@...ts.infradead.org
Cc: Jens Axboe <axboe@...nel.dk>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@....de>,
Chaitanya Kulkarni <Chaitanya.Kulkarni@....com>,
Minwoo Im <minwoo.im.dev@...il.com>
Subject: [PATCH V4 0/1] block: fix I/O errors in BLKRRPART
Hello,
This patch fixes I/O errors that occur when the BLKRRPART ioctl() is
issued right after a format operation that changed the logical block
size of the block device, with the same file descriptor kept open.
Testcase:
The following testcase uses an NVMe namespace with these conditions:
- the current LBA format is lbaf=0 (512-byte logical block size)
- LBA format lbaf=1 has a 4096-byte logical block size
# Format block device logical block size 512B to 4096B
nvme format /dev/nvme0n1 --lbaf=1 --force
This causes I/O errors because the BLKRRPART ioctl() is issued right
after the format command on the same open file descriptor in the
application (e.g., nvme-cli):
	fd = open("/dev/nvme0n1", O_RDONLY);
	nvme_format(fd, ...);
	if (ioctl(fd, BLKRRPART) < 0)
		...
Errors:
We can see a Read command with Number of LBAs (NLB) 0xffff (65535),
which underflowed because the BLKRRPART operation sized its requests
based on the block device's i_blkbits, which was still 9, via
buffer_head.
[dmesg-snip]
[ 10.771740] blk_update_request: operation not supported error, dev nvme0n1, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 10.780262] Buffer I/O error on dev nvme0n1, logical block 0, async page read
[event-snip]
kworker/0:1H-56 [000] .... 913.456922: nvme_setup_cmd: nvme0: disk=nvme0n1, qid=1, cmdid=216, nsid=1, flags=0x0, meta=0x0, cmd=(nvme_cmd_read slba=0, len=65535, ctrl=0x0, dsmgmt=0, reftag=0)
ksoftirqd/0-9 [000] .Ns. 916.566351: nvme_complete_rq: nvme0: disk=nvme0n1, qid=1, cmdid=216, res=0x0, retries=0, flags=0x0, status=0x4002
The patch below fixes the I/O errors by setting a flag on the
request_queue and rejecting I/O requests in the block layer until the
file descriptor is re-opened and the cached block size is updated by
__blkdev_get(). This is based on the previous discussion [1].
Since V3(RFC):
- Move flag from gendisk to request_queue for future clean-ups.
(Christoph, [3])
Since V2(RFC):
- Added a cover letter with a testcase and error logs. Removed an
unrelated change (a stray empty line). (Chaitanya, [2])
- Put blkdev with blkdev_put_no_open().
Since V1(RFC):
- Updated patch to reject I/O rather than updating i_blkbits of the
block device's inode directly from driver. (Christoph, [1])
[1] https://lore.kernel.org/linux-nvme/20201223183143.GB13354@localhost.localdomain/T/#t
[2] https://lore.kernel.org/linux-nvme/20201230140504.GB7917@localhost.localdomain/T/#t
[3] https://lore.kernel.org/linux-block/20210105101202.GA9970@localhost.localdomain/T/#u
Thanks,
Minwoo Im (1):
block: reject I/O for same fd if block size changed
block/blk-settings.c | 3 +++
block/partitions/core.c | 12 ++++++++++++
fs/block_dev.c | 8 ++++++++
include/linux/blkdev.h | 1 +
4 files changed, 24 insertions(+)
--
2.17.1