Message-Id: <20230522135018.2742245-5-dhowells@redhat.com>
Date: Mon, 22 May 2023 14:49:51 +0100
From: David Howells <dhowells@...hat.com>
To: Jens Axboe <axboe@...nel.dk>, Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@...radead.org>
Cc: David Howells <dhowells@...hat.com>,
Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
Jeff Layton <jlayton@...nel.org>,
David Hildenbrand <david@...hat.com>,
Jason Gunthorpe <jgg@...dia.com>,
Logan Gunthorpe <logang@...tatee.com>,
Hillf Danton <hdanton@...a.com>,
Christian Brauner <brauner@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Christoph Hellwig <hch@....de>,
John Hubbard <jhubbard@...dia.com>
Subject: [PATCH v22 04/31] splice: Clean up copy_splice_read() a bit

Do a couple of cleanups to copy_splice_read():

 (1) Cast to struct page **, not void *.

 (2) Simplify the calculation of the number of pages to keep/reclaim in
     copy_splice_read().
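
For illustration only (not part of the change itself), the arithmetic in (2)
amounts to rounding the number of bytes read up to whole pages and keeping
those; a minimal standalone sketch of the old and new calculations, with
hypothetical helper names:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Old scheme: track the bytes left untouched, convert back to pages. */
static unsigned long pages_to_reclaim_old(unsigned long npages, long ret)
{
	unsigned long reclaim = npages * PAGE_SIZE;

	if (ret > 0)
		reclaim -= ret;
	return reclaim / PAGE_SIZE;
}

/* New scheme: round the bytes read up to whole pages and keep those. */
static unsigned long pages_to_keep_new(long ret)
{
	return ret > 0 ? DIV_ROUND_UP((unsigned long)ret, PAGE_SIZE) : 0;
}

int main(void)
{
	unsigned long npages = 4;	/* pages allocated for the read */
	long ret = 5000;		/* bytes returned by ->read_iter() */

	/* Both give the same split: keep 2 pages, release 2. */
	printf("old: keep %lu of %lu pages\n",
	       npages - pages_to_reclaim_old(npages, ret), npages);
	printf("new: keep %lu of %lu pages\n",
	       pages_to_keep_new(ret), npages);
	return 0;
}

Counting the kept pages directly avoids carrying a byte count that only gets
converted back into pages at the end.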
Suggested-by: Christoph Hellwig <hch@...radead.org>
Signed-off-by: David Howells <dhowells@...hat.com>
Reviewed-by: Christoph Hellwig <hch@....de>
Reviewed-by: Christian Brauner <brauner@...nel.org>
cc: Jens Axboe <axboe@...nel.dk>
cc: Al Viro <viro@...iv.linux.org.uk>
cc: David Hildenbrand <david@...hat.com>
cc: John Hubbard <jhubbard@...dia.com>
cc: linux-mm@...ck.org
cc: linux-block@...r.kernel.org
cc: linux-fsdevel@...r.kernel.org
---
Notes:
ver #21)
- direct_splice_read() got renamed to copy_splice_read().
fs/splice.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)
diff --git a/fs/splice.c b/fs/splice.c
index 2478e065bc53..f9a9be797b0c 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -311,7 +311,7 @@ ssize_t copy_splice_read(struct file *in, loff_t *ppos,
 	struct kiocb kiocb;
 	struct page **pages;
 	ssize_t ret;
-	size_t used, npages, chunk, remain, reclaim;
+	size_t used, npages, chunk, remain, keep = 0;
 	int i;
 
 	/* Work out how much data we can actually add into the pipe */
@@ -325,7 +325,7 @@ ssize_t copy_splice_read(struct file *in, loff_t *ppos,
 	if (!bv)
 		return -ENOMEM;
 
-	pages = (void *)(bv + npages);
+	pages = (struct page **)(bv + npages);
 	npages = alloc_pages_bulk_array(GFP_USER, npages, pages);
 	if (!npages) {
 		kfree(bv);
@@ -348,11 +348,8 @@ ssize_t copy_splice_read(struct file *in, loff_t *ppos,
 	kiocb.ki_pos = *ppos;
 	ret = call_read_iter(in, &kiocb, &to);
 
-	reclaim = npages * PAGE_SIZE;
-	remain = 0;
 	if (ret > 0) {
-		reclaim -= ret;
-		remain = ret;
+		keep = DIV_ROUND_UP(ret, PAGE_SIZE);
 		*ppos = kiocb.ki_pos;
 		file_accessed(in);
 	} else if (ret < 0) {
@@ -365,14 +362,12 @@ ssize_t copy_splice_read(struct file *in, loff_t *ppos,
 	}
 
 	/* Free any pages that didn't get touched at all. */
-	reclaim /= PAGE_SIZE;
-	if (reclaim) {
-		npages -= reclaim;
-		release_pages(pages + npages, reclaim);
-	}
+	if (keep < npages)
+		release_pages(pages + keep, npages - keep);
 
 	/* Push the remaining pages into the pipe. */
-	for (i = 0; i < npages; i++) {
+	remain = ret;
+	for (i = 0; i < keep; i++) {
 		struct pipe_buffer *buf = pipe_head_buf(pipe);
 
 		chunk = min_t(size_t, remain, PAGE_SIZE);