Message-ID: <2d3c8287-55cf-0150-acd6-19feb9e85771@gmail.com>
Date: Thu, 30 Jul 2020 18:57:47 +0300
From: Pavel Begunkov <asml.silence@...il.com>
To: Kanchan Joshi <joshi.k@...sung.com>, axboe@...nel.dk,
viro@...iv.linux.org.uk, bcrl@...ck.org
Cc: willy@...radead.org, hch@...radead.org, Damien.LeMoal@....com,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-aio@...ck.org, io-uring@...r.kernel.org,
linux-block@...r.kernel.org, linux-api@...r.kernel.org,
SelvaKumar S <selvakuma.s1@...sung.com>,
Nitesh Shetty <nj.shetty@...sung.com>,
Javier Gonzalez <javier.gonz@...sung.com>
Subject: Re: [PATCH v4 6/6] io_uring: add support for zone-append
On 24/07/2020 18:49, Kanchan Joshi wrote:
> From: SelvaKumar S <selvakuma.s1@...sung.com>
>
> Repurpose [cqe->res, cqe->flags] into cqe->res64 (signed) to report the
> 64-bit written offset for zone-append. An appending write, which requires
> reporting the written location (conveyed by the IOCB_ZONE_APPEND flag), is
> guaranteed not to be a short write; this avoids the need to report the
> number of bytes copied.
> The append offset is returned by the lower layer to io_uring via the ret2
> argument of the ki_complete interface. Collect it and send it to
> user-space via cqe->res64.
>
> Signed-off-by: SelvaKumar S <selvakuma.s1@...sung.com>
> Signed-off-by: Kanchan Joshi <joshi.k@...sung.com>
> Signed-off-by: Nitesh Shetty <nj.shetty@...sung.com>
> Signed-off-by: Javier Gonzalez <javier.gonz@...sung.com>
> ---
> fs/io_uring.c | 49 ++++++++++++++++++++++++++++++++++++-------
> include/uapi/linux/io_uring.h | 9 ++++++--
> 2 files changed, 48 insertions(+), 10 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 7809ab2..6510cf5 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
...
> @@ -1244,8 +1254,15 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
> req->flags &= ~REQ_F_OVERFLOW;
> if (cqe) {
> WRITE_ONCE(cqe->user_data, req->user_data);
> - WRITE_ONCE(cqe->res, req->result);
> - WRITE_ONCE(cqe->flags, req->cflags);
> + if (unlikely(req->flags & REQ_F_ZONE_APPEND)) {
> + if (likely(req->result > 0))
> + WRITE_ONCE(cqe->res64, req->rw.append_offset);
> + else
> + WRITE_ONCE(cqe->res64, req->result);
> + } else {
> + WRITE_ONCE(cqe->res, req->result);
> + WRITE_ONCE(cqe->flags, req->cflags);
> + }
> } else {
> WRITE_ONCE(ctx->rings->cq_overflow,
> atomic_inc_return(&ctx->cached_cq_overflow));
> @@ -1284,8 +1301,15 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
> cqe = io_get_cqring(ctx);
> if (likely(cqe)) {
> WRITE_ONCE(cqe->user_data, req->user_data);
> - WRITE_ONCE(cqe->res, res);
> - WRITE_ONCE(cqe->flags, cflags);
> + if (unlikely(req->flags & REQ_F_ZONE_APPEND)) {
> + if (likely(res > 0))
> + WRITE_ONCE(cqe->res64, req->rw.append_offset);
1. As I mentioned before, it's not nice to ignore @cflags
2. This is not the right place for opcode-specific handling
3. It doesn't work with overflowed reqs, see the final else below
For this scheme, I'd pass @append_offset as an argument. That should
also remove this extra if from the fast path, which Jens mentioned.
> + else
> + WRITE_ONCE(cqe->res64, res);
> + } else {
> + WRITE_ONCE(cqe->res, res);
> + WRITE_ONCE(cqe->flags, cflags);
> + }
> } else if (ctx->cq_overflow_flushed) {
> WRITE_ONCE(ctx->rings->cq_overflow,
> atomic_inc_return(&ctx->cached_cq_overflow));
> @@ -1943,7 +1967,7 @@ static inline void req_set_fail_links(struct io_kiocb *req)
> req->flags |= REQ_F_FAIL_LINK;
> }
>
--
Pavel Begunkov