Message-ID: <11755fdb-4a28-0ea5-89a4-d51b2715f8c2@samba.org>
Date: Fri, 21 Oct 2022 16:03:54 +0200
From: Stefan Metzmacher <metze@...ba.org>
To: Pavel Begunkov <asml.silence@...il.com>,
io-uring <io-uring@...r.kernel.org>, Jens Axboe <axboe@...nel.dk>
Cc: Jakub Kicinski <kuba@...nel.org>, netdev <netdev@...r.kernel.org>,
Dylan Yudaken <dylany@...com>
Subject: Re: IORING_SEND_NOTIF_REPORT_USAGE (was Re: IORING_CQE_F_COPIED)
Am 21.10.22 um 13:09 schrieb Pavel Begunkov:
> On 10/21/22 10:36, Stefan Metzmacher wrote:
>> Hi Pavel,
> [...]
>>> Right, I'm just tired of back porting patches by hand :)
>>
>> ok, I just assumed it would be 6.1 only.
>
> I'm fine with 6.1 only, it'd make things easier. I thought from
> your first postings you wanted it 6.0. Then we don't need to care
> about the placing of the copied/used flags.
>
>>>> Otherwise we could have IORING_CQE_F_COPIED by default without opt-in
>>>> flag...
>>
>> Do you still want an opt-in flag to get IORING_CQE_F_COPIED?
>> If so what name do you want it to be?
>
> Ala a IORING_SEND_* flag? Yes please.
>
> *_REPORT_USAGE was fine but I'd make it IORING_SEND_ZC_REPORT_USAGE.
> And can be extended if there is more info needed in the future.
>
> And I don't mind using a bit in cqe->res, makes cflags less polluted.
So no worries about the delayed/skipped sendmsg completion anymore?
Should I define it like this?
#define IORING_NOTIF_USAGE_ZC_COPIED (1U << 31)
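For reference, here's roughly how I'd expect userspace to consume it. This is only a sketch against liburing's io_uring_prep_send_zc() (whose zc_flags argument ends up in sqe->ioprio); the helper function and the omitted error handling are illustrative, not part of the patch:

/* Illustrative only: opt in, then check the notification CQE. */
#include <liburing.h>
#include <stdio.h>

static void send_zc_and_report(struct io_uring *ring, int sockfd,
			       const void *buf, size_t len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;

	io_uring_prep_send_zc(sqe, sockfd, buf, len, 0,
			      IORING_SEND_ZC_REPORT_USAGE);
	io_uring_submit(ring);

	/* first CQE: the send result, IORING_CQE_F_MORE announces the notif */
	io_uring_wait_cqe(ring, &cqe);
	io_uring_cqe_seen(ring, cqe);

	/* second CQE: the notification carrying IORING_CQE_F_NOTIF */
	io_uring_wait_cqe(ring, &cqe);
	if ((cqe->flags & IORING_CQE_F_NOTIF) &&
	    (cqe->res & IORING_NOTIF_USAGE_ZC_COPIED))
		fprintf(stderr, "zerocopy fell back to copying\n");
	io_uring_cqe_seen(ring, cqe);
}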
See the full patch below, and a short note after it spelling out how the reporting condition behaves...
metze
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index d69ae7eba773..32e1f2a55b70 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -296,10 +296,24 @@ enum io_uring_op {
*
* IORING_RECVSEND_FIXED_BUF Use registered buffers, the index is stored in
* the buf_index field.
+ *
+ * IORING_SEND_ZC_REPORT_USAGE
+ * If SEND[MSG]_ZC should report
+ * the zerocopy usage in cqe.res
+ * for the IORING_CQE_F_NOTIF cqe.
+ * IORING_NOTIF_USAGE_ZC_COPIED if data was copied
+ * (at least partially).
*/
#define IORING_RECVSEND_POLL_FIRST (1U << 0)
#define IORING_RECV_MULTISHOT (1U << 1)
#define IORING_RECVSEND_FIXED_BUF (1U << 2)
+#define IORING_SEND_ZC_REPORT_USAGE (1U << 3)
+
+/*
+ * cqe.res for IORING_CQE_F_NOTIF if
+ * IORING_SEND_ZC_REPORT_USAGE was requested
+ */
+#define IORING_NOTIF_USAGE_ZC_COPIED (1U << 31)
/*
* accept flags stored in sqe->ioprio
diff --git a/io_uring/net.c b/io_uring/net.c
index 56078f47efe7..1aa3b50b3e82 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -939,7 +939,8 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
zc->flags = READ_ONCE(sqe->ioprio);
if (zc->flags & ~(IORING_RECVSEND_POLL_FIRST |
- IORING_RECVSEND_FIXED_BUF))
+ IORING_RECVSEND_FIXED_BUF |
+ IORING_SEND_ZC_REPORT_USAGE))
return -EINVAL;
notif = zc->notif = io_alloc_notif(ctx);
if (!notif)
@@ -957,6 +958,9 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
req->imu = READ_ONCE(ctx->user_bufs[idx]);
io_req_set_rsrc_node(notif, ctx, 0);
}
+ if (zc->flags & IORING_SEND_ZC_REPORT_USAGE) {
+ io_notif_to_data(notif)->zc_report = true;
+ }
if (req->opcode == IORING_OP_SEND_ZC) {
if (READ_ONCE(sqe->__pad3[0]))
diff --git a/io_uring/notif.c b/io_uring/notif.c
index e37c6569d82e..4bfef10161fa 100644
--- a/io_uring/notif.c
+++ b/io_uring/notif.c
@@ -18,6 +18,10 @@ static void __io_notif_complete_tw(struct io_kiocb *notif, bool *locked)
__io_unaccount_mem(ctx->user, nd->account_pages);
nd->account_pages = 0;
}
+
+ if (nd->zc_report && (nd->zc_copied || !nd->zc_used))
+ notif->cqe.res |= IORING_NOTIF_USAGE_ZC_COPIED;
+
io_req_task_complete(notif, locked);
}
@@ -28,6 +32,13 @@ static void io_uring_tx_zerocopy_callback(struct sk_buff *skb,
struct io_notif_data *nd = container_of(uarg, struct io_notif_data, uarg);
struct io_kiocb *notif = cmd_to_io_kiocb(nd);
+ if (nd->zc_report) {
+ if (success && !nd->zc_used && skb)
+ WRITE_ONCE(nd->zc_used, true);
+ else if (!success && !nd->zc_copied)
+ WRITE_ONCE(nd->zc_copied, true);
+ }
+
if (refcount_dec_and_test(&uarg->refcnt)) {
notif->io_task_work.func = __io_notif_complete_tw;
io_req_task_work_add(notif);
@@ -55,6 +66,7 @@ struct io_kiocb *io_alloc_notif(struct io_ring_ctx *ctx)
nd->account_pages = 0;
nd->uarg.flags = SKBFL_ZEROCOPY_FRAG | SKBFL_DONT_ORPHAN;
nd->uarg.callback = io_uring_tx_zerocopy_callback;
+ nd->zc_report = nd->zc_used = nd->zc_copied = false;
refcount_set(&nd->uarg.refcnt, 1);
return notif;
}
diff --git a/io_uring/notif.h b/io_uring/notif.h
index e4fbcae0f3fd..6be2e5ae8581 100644
--- a/io_uring/notif.h
+++ b/io_uring/notif.h
@@ -15,6 +15,9 @@ struct io_notif_data {
struct file *file;
struct ubuf_info uarg;
unsigned long account_pages;
+ bool zc_report;
+ bool zc_used;
+ bool zc_copied;
};
void io_notif_flush(struct io_kiocb *notif);
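And to spell out my reading of the reporting rule in __io_notif_complete_tw() above (purely illustrative, it just mirrors the hunk):

/*
 * zc_used  zc_copied  ->  notif cqe.res
 * true     false          0                             (pure zerocopy)
 * true     true           IORING_NOTIF_USAGE_ZC_COPIED  (partial fallback)
 * false    any            IORING_NOTIF_USAGE_ZC_COPIED  (zerocopy never used)
 */
if (nd->zc_report && (nd->zc_copied || !nd->zc_used))
	notif->cqe.res |= IORING_NOTIF_USAGE_ZC_COPIED;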