Message-ID: <aJDohO7v7lMWxn7V@kbusch-mbp>
Date: Mon, 4 Aug 2025 11:06:12 -0600
From: Keith Busch <kbusch@...nel.org>
To: Jens Axboe <axboe@...nel.dk>
Cc: Keith Busch <kbusch@...a.com>, linux-block@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
snitzer@...nel.org, dw@...idwei.uk, brauner@...nel.org
Subject: Re: [PATCH 0/7] direct-io: even more flexible io vectors
On Sat, Aug 02, 2025 at 09:37:32AM -0600, Jens Axboe wrote:
> Did you write some test cases for this?
I have some crude unit tests to hit specific conditions that might
happen with NVMe.

Note: the second test here will fail with the wrong result with this
version of the patchset due to the issue I mentioned on patch 2, but
I have a fix ready for the next version.
---
/*
 * This test exercises NVMe's PRP virtual boundary. It is intended to
 * execute on such a device with a 4k formatted logical block size.
*
 * The first test will submit a vectored read with a total size aligned to a 4k
 * block, but whose individual vectors are not. This should be successful.
*
* The second test will submit a vectored read with a total size aligned to a
* 4k block, but the first vector contains an invalid address. This should get
* EFAULT.
*
 * The third test will submit an IO with a total size aligned to a 4k block,
 * but it violates the virtual boundary condition, which should result in a
 * split to a 0-length bio. This should get EINVAL.
*
 * The fourth test will submit IO with a total size aligned to a 4k block, but
 * with invalid DMA offsets. This should get EINVAL.
*
* The last test will submit a large IO with a page offset that should exceed
* the bio max vectors limit, resulting in reverting part of a bio iteration.
* This should be successful.
*/
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <sys/uio.h>
#include <string.h>
#define BSIZE (8 * 1024 * 1024)
#define VECS 4
int main(int argc, char **argv)
{
	int fd, ret;
	struct iovec iov[VECS];
	char *buf;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <nvme block device>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	ret = posix_memalign((void **)&buf, 4096, BSIZE);
	if (ret)
		return ret;
	memset(buf, 0, BSIZE);
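
	/*
	 * Test 1: total size is 4k aligned but the individual vectors are
	 * not; every segment still honors the virtual boundary, so this
	 * should succeed.
	 */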
	iov[0].iov_base = buf + 3072;
	iov[0].iov_len = 1024;
	iov[1].iov_base = buf + (2 * 4096);
	iov[1].iov_len = 4096;
	iov[2].iov_base = buf + (8 * 4096);
	iov[2].iov_len = 4096;
	iov[3].iov_base = buf + (16 * 4096);
	iov[3].iov_len = 3072;
	ret = preadv(fd, iov, VECS, 0);
	if (ret < 0)
		perror("unexpected read failure");
	iov[0].iov_base = NULL;
	ret = preadv(fd, iov, VECS, 0);
	if (ret < 0)
		perror("expected read failure for invalid address");
	else
		fprintf(stderr, "test 2: read unexpectedly succeeded\n");
	iov[0].iov_base = buf;
	iov[0].iov_len = 1024;
	iov[1].iov_base = buf + (2 * 4096);
	iov[1].iov_len = 1024;
	iov[2].iov_base = buf + (8 * 4096);
	iov[2].iov_len = 1024;
	iov[3].iov_base = buf + (16 * 4096);
	iov[3].iov_len = 1024;
	ret = preadv(fd, iov, VECS, 0);
	if (ret < 0)
		perror("expected read failure for invalid virtual boundary");
	else
		fprintf(stderr, "test 3: read unexpectedly succeeded\n");
	iov[0].iov_base = buf + 3072;
	iov[0].iov_len = 1025;
	iov[1].iov_base = buf + (2 * 4096);
	iov[1].iov_len = 4096;
	iov[2].iov_base = buf + (8 * 4096);
	iov[2].iov_len = 4096;
	iov[3].iov_base = buf + (16 * 4096);
	iov[3].iov_len = 3071;
	ret = preadv(fd, iov, VECS, 0);
	if (ret < 0)
		perror("expected read failure for invalid dma offsets");
	else
		fprintf(stderr, "test 4: read unexpectedly succeeded\n");
	ret = pread(fd, buf + 2048, BSIZE - 8192, 0);
	if (ret < 0)
		perror("unexpected large read failure");

	free(buf);
	close(fd);
	return 0;
}
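
For reference, something like this should build and run it (the file
name and device path are just examples; the device should be an NVMe
namespace formatted with a 4k logical block size):

    gcc -O2 -o dio-vec-test dio-vec-test.c
    ./dio-vec-test /dev/nvme0n1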