Message-ID: <0cdf2aac-6364-742d-debb-cfd58b4c6f2b@gmail.com>
Date: Sun, 20 Dec 2020 15:18:05 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: noah <goldstein.w.n@...il.com>
Cc: noah <goldstein.n@...tl.edu>, Jens Axboe <axboe@...nel.dk>,
Alexander Viro <viro@...iv.linux.org.uk>,
io-uring@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fs: io_uring.c: Add skip option for __io_sqe_files_update

On 20/12/2020 06:50, noah wrote:
> From: noah <goldstein.n@...tl.edu>
>
> This patch makes it so that specifying a file descriptor value of -2 will
> skip updating the corresponding fixed file index.
>
> This will allow for users to reduce the number of syscalls necessary
> to update a sparse file range when using the fixed file option.
Answering the github thread -- it's indeed a simple change; I had it the
same day you posted the issue. See below, it's a bit cleaner. However, I
want to first review "io_uring: buffer registration enhancements", and
if it's good, I'd prefer to let that go in first (even if partially) for
easier merging.

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 941fe9b64fd9..b3ae9d5da17e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7847,9 +7847,8 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
 	if (IS_ERR(ref_node))
 		return PTR_ERR(ref_node);
 
-	done = 0;
 	fds = u64_to_user_ptr(up->fds);
-	while (nr_args) {
+	for (done = 0; done < nr_args; done++) {
 		struct fixed_file_table *table;
 		unsigned index;
 
@@ -7858,7 +7857,10 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
 			err = -EFAULT;
 			break;
 		}
-		i = array_index_nospec(up->offset, ctx->nr_user_files);
+		if (fd == IORING_REGISTER_FILES_SKIP)
+			continue;
+
+		i = array_index_nospec(up->offset + done, ctx->nr_user_files);
 		table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
 		index = i & IORING_FILE_TABLE_MASK;
 		if (table->files[index]) {
@@ -7896,9 +7898,6 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
 				break;
 			}
 		}
-		nr_args--;
-		done++;
-		up->offset++;
 	}
 
 	if (needs_switch) {
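
For reference, userspace usage would look roughly like this -- a minimal
sketch, assuming the patch's IORING_REGISTER_FILES_SKIP (-2) and a ring
that already has at least three files registered; update_slots_0_and_2()
is just an illustrative name:

#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

#ifndef IORING_REGISTER_FILES_SKIP
#define IORING_REGISTER_FILES_SKIP (-2)	/* from the proposed patch */
#endif

static int update_slots_0_and_2(int ring_fd, int new_fd0, int new_fd2)
{
	/* One call updates slots 0 and 2; slot 1 keeps its current file. */
	__s32 fds[3] = { new_fd0, IORING_REGISTER_FILES_SKIP, new_fd2 };
	struct io_uring_files_update up;

	memset(&up, 0, sizeof(up));
	up.offset = 0;				/* first fixed-file slot to update */
	up.fds = (__u64)(unsigned long)fds;	/* user pointer to the fd array */

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_FILES_UPDATE, &up, 3);
}

Without the skip value, the same sparse update would need one
io_uring_register() call per contiguous run of slots.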
--
Pavel Begunkov