Message-ID: <b7135da8-a04f-48ec-957f-09542178b861@ijzerbout.nl>
Date: Fri, 15 Nov 2024 21:01:57 +0100
From: Kees Bakker <kees@...erbout.nl>
To: David Howells <dhowells@...hat.com>,
Christian Brauner <christian@...uner.io>, Steve French <smfrench@...il.com>,
Matthew Wilcox <willy@...radead.org>
Cc: Jeff Layton <jlayton@...nel.org>, Gao Xiang
<hsiangkao@...ux.alibaba.com>, Dominique Martinet <asmadeus@...ewreck.org>,
Marc Dionne <marc.dionne@...istor.com>, Paulo Alcantara <pc@...guebit.com>,
Shyam Prasad N <sprasad@...rosoft.com>, Tom Talpey <tom@...pey.com>,
Eric Van Hensbergen <ericvh@...nel.org>, Ilya Dryomov <idryomov@...il.com>,
netfs@...ts.linux.dev, linux-afs@...ts.infradead.org,
linux-cifs@...r.kernel.org, linux-nfs@...r.kernel.org,
ceph-devel@...r.kernel.org, v9fs@...ts.linux.dev,
linux-erofs@...ts.ozlabs.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 07/33] netfs: Abstract out a rolling folio buffer
implementation
On 08-11-2024 at 18:32, David Howells wrote:
> A rolling buffer is a series of folios held in a list of folio_queues. New
> folios and folio_queue structs may be inserted at the head simultaneously
> with spent ones being removed from the tail without the need for locking.
>
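(To make the lock-free head/tail arrangement described above concrete,
here is a minimal sketch with made-up names -- struct seg,
rolling_append() and rolling_retire() are illustrative only and not the
rolling_buffer/folio_queue API added by this patch.  It assumes a single
producer appending at the head and a single consumer retiring spent
segments from the tail, so the two sides only ever touch opposite ends
of the chain.)

#include <asm/barrier.h>	/* smp_store_release()/smp_load_acquire() */

struct seg {
	struct seg	*next;		/* towards the head (newer data) */
	void		*slots[16];	/* the folios would live here */
	unsigned int	nr;
};

struct rolling {
	struct seg	*head;		/* producer appends here */
	struct seg	*tail;		/* consumer retires from here */
};

/* Producer side: link a fresh segment in after the current head. */
static void rolling_append(struct rolling *r, struct seg *s)
{
	s->next = NULL;
	/* Publish the new segment before the consumer can follow ->next. */
	smp_store_release(&r->head->next, s);
	r->head = s;
}

/* Consumer side: detach the oldest segment once it has been consumed. */
static struct seg *rolling_retire(struct rolling *r)
{
	struct seg *old = r->tail;
	struct seg *next = smp_load_acquire(&old->next);

	if (!next)
		return NULL;	/* never retire the segment being filled */
	r->tail = next;
	return old;		/* caller frees it */
}

What the real code additionally has to get right -- and what the next
paragraph is about -- is keeping the embedded iov_iter valid while
segments are appended and retired underneath it.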
> The rolling buffer includes an iov_iter, and care has to be taken when
> managing this as the list of folio_queues is extended so that an oops
> isn't incurred because the iterator was pointing to the end of a
> folio_queue segment that got appended to and then removed.
>
> We need to use the mechanism twice, once for read and once for write, and,
> in future patches, we will use a second rolling buffer to handle bounce
> buffering for content encryption.
>
> Signed-off-by: David Howells <dhowells@...hat.com>
> cc: Jeff Layton <jlayton@...nel.org>
> cc: netfs@...ts.linux.dev
> cc: linux-fsdevel@...r.kernel.org
> ---
> fs/netfs/Makefile | 1 +
> fs/netfs/buffered_read.c | 119 ++++-------------
> fs/netfs/direct_read.c | 14 +-
> fs/netfs/direct_write.c | 10 +-
> fs/netfs/internal.h | 4 -
> fs/netfs/misc.c | 147 ---------------------
> fs/netfs/objects.c | 2 +-
> fs/netfs/read_pgpriv2.c | 32 ++---
> fs/netfs/read_retry.c | 2 +-
> fs/netfs/rolling_buffer.c | 225 +++++++++++++++++++++++++++++++++
> fs/netfs/write_collect.c | 19 +--
> fs/netfs/write_issue.c | 26 ++--
> include/linux/netfs.h | 10 +-
> include/linux/rolling_buffer.h | 61 +++++++++
> include/trace/events/netfs.h | 2 +
> 15 files changed, 375 insertions(+), 299 deletions(-)
> create mode 100644 fs/netfs/rolling_buffer.c
> create mode 100644 include/linux/rolling_buffer.h
> [...]
> diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
> index 88f2adfab75e..0722fb9919a3 100644
> --- a/fs/netfs/direct_write.c
> +++ b/fs/netfs/direct_write.c
> @@ -68,19 +68,19 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
> * request.
> */
> if (async || user_backed_iter(iter)) {
> - n = netfs_extract_user_iter(iter, len, &wreq->iter, 0);
> + n = netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0);
> if (n < 0) {
> ret = n;
> goto out;
> }
> - wreq->direct_bv = (struct bio_vec *)wreq->iter.bvec;
> + wreq->direct_bv = (struct bio_vec *)wreq->buffer.iter.bvec;
> wreq->direct_bv_count = n;
> wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
> } else {
> - wreq->iter = *iter;
> + wreq->buffer.iter = *iter;
> }
>
> - wreq->io_iter = wreq->iter;
> + wreq->buffer.iter = wreq->buffer.iter;
Is this correct? It assigns wreq->buffer.iter to itself.
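For comparison, the pre-patch code copied the prepared iterator into the
separate I/O iterator:

	wreq->io_iter = wreq->iter;

If buffer.iter now takes over the role that io_iter used to play, the
new line is a no-op that could presumably just be dropped; otherwise an
assignment to some other field was probably intended.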
> }
>
> __set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
> [...]