Date:   Sat, 17 Sep 2022 10:22:08 +0100
From:   Pavel Begunkov <asml.silence@...il.com>
To:     Stefan Metzmacher <metze@...ba.org>, io-uring@...r.kernel.org,
        axboe@...nel.dk
Cc:     Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org
Subject: Re: [PATCH 5/5] io_uring/notif: let userspace know how effective the
 zero copy usage was

On 9/16/22 22:36, Stefan Metzmacher wrote:
> The 2nd cqe for IORING_OP_SEND_ZC has IORING_CQE_F_NOTIF set in cqe->flags
> and it will now have the number of successfully completed
> io_uring_tx_zerocopy_callback() callbacks in the lower 31-bits
> of cqe->res, the high bit (0x80000000) is set when
> io_uring_tx_zerocopy_callback() was called with success=false.

It has a couple of problems, and because that "simplify uapi"
patch is transitional it doesn't fit well with what I'm queuing
for 6.1; let's hold it for a while.


> If cqe->res is still 0, zero copy wasn't used at all.
> 
> These values give userspace a chance to adjust its strategy
> when choosing IORING_OP_SEND_ZC or IORING_OP_SEND. It's also a bit
> richer than just a simple SO_EE_CODE_ZEROCOPY_COPIED indication.
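
For reference, a rough userspace sketch (not part of the patch) of how a
consumer might decode the notification CQE under the encoding proposed
above; it assumes liburing, and the handler name is made up for
illustration:

	#include <stdbool.h>
	#include <stdio.h>
	#include <liburing.h>

	/* Decode the IORING_CQE_F_NOTIF completion of an IORING_OP_SEND_ZC
	 * request, per the encoding proposed in this patch (subject to
	 * change): lower 31 bits = successful zero-copy callbacks, high
	 * bit = at least one copy fallback happened.
	 */
	static void handle_send_zc_notif(struct io_uring *ring,
					 struct io_uring_cqe *cqe)
	{
		if (cqe->flags & IORING_CQE_F_NOTIF) {
			unsigned int zc_count = cqe->res & 0x7fffffff;
			bool copied = cqe->res < 0;	/* high bit 0x80000000 */

			if (!cqe->res)
				printf("zero copy was not used at all\n");
			else
				printf("%u zero-copy completions%s\n", zc_count,
				       copied ? ", at least one copy fallback" : "");
		}
		io_uring_cqe_seen(ring, cqe);
	}
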
> 
> Fixes: b48c312be05e8 ("io_uring/net: simplify zerocopy send user API")
> Fixes: eb315a7d1396b ("tcp: support externally provided ubufs")
> Fixes: 1fd3ae8c906c0 ("ipv6/udp: support externally provided ubufs")
> Fixes: c445f31b3cfaa ("ipv4/udp: support externally provided ubufs")
> Signed-off-by: Stefan Metzmacher <metze@...ba.org>
> Cc: Pavel Begunkov <asml.silence@...il.com>
> Cc: Jens Axboe <axboe@...nel.dk>
> Cc: io-uring@...r.kernel.org
> Cc: Jakub Kicinski <kuba@...nel.org>
> Cc: netdev@...r.kernel.org
> ---
>   io_uring/notif.c      | 18 ++++++++++++++++++
>   net/ipv4/ip_output.c  |  3 ++-
>   net/ipv4/tcp.c        |  2 ++
>   net/ipv6/ip6_output.c |  3 ++-
>   4 files changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/io_uring/notif.c b/io_uring/notif.c
> index e37c6569d82e..b07d2a049931 100644
> --- a/io_uring/notif.c
> +++ b/io_uring/notif.c
> @@ -28,7 +28,24 @@ static void io_uring_tx_zerocopy_callback(struct sk_buff *skb,
>   	struct io_notif_data *nd = container_of(uarg, struct io_notif_data, uarg);
>   	struct io_kiocb *notif = cmd_to_io_kiocb(nd);
>   
> +	uarg->zerocopy = uarg->zerocopy & success;
> +
> +	if (success && notif->cqe.res < S32_MAX)
> +		notif->cqe.res++;
> +
>   	if (refcount_dec_and_test(&uarg->refcnt)) {
> +		/*
> +		 * If we hit at least one case that
> +		 * was not able to use zero copy,
> +		 * we set the high bit 0x80000000
> +		 * so that notif->cqe.res < 0 means the data
> +		 * was copied at least once.
> +		 *
> +		 * The other 31 bits are the success count.
> +		 */
> +		if (!uarg->zerocopy)
> +			notif->cqe.res |= S32_MIN;
> +
>   		notif->io_task_work.func = __io_notif_complete_tw;
>   		io_req_task_work_add(notif);
>   	}
> @@ -53,6 +70,7 @@ struct io_kiocb *io_alloc_notif(struct io_ring_ctx *ctx)
>   
>   	nd = io_notif_to_data(notif);
>   	nd->account_pages = 0;
> +	nd->uarg.zerocopy = 1;
>   	nd->uarg.flags = SKBFL_ZEROCOPY_FRAG | SKBFL_DONT_ORPHAN;
>   	nd->uarg.callback = io_uring_tx_zerocopy_callback;
>   	refcount_set(&nd->uarg.refcnt, 1);
> diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
> index d7bd1daf022b..4bdea7a4b2f7 100644
> --- a/net/ipv4/ip_output.c
> +++ b/net/ipv4/ip_output.c
> @@ -1032,7 +1032,8 @@ static int __ip_append_data(struct sock *sk,
>   				paged = true;
>   				zc = true;
>   				uarg = msg->msg_ubuf;
> -			}
> +			} else
> +				msg->msg_ubuf->zerocopy = 0;
>   		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
>   			uarg = msg_zerocopy_realloc(sk, length, skb_zcopy(skb));
>   			if (!uarg)
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 970e9a2cca4a..27a22d470741 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1231,6 +1231,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
>   			uarg = msg->msg_ubuf;
>   			net_zcopy_get(uarg);
>   			zc = sk->sk_route_caps & NETIF_F_SG;
> +			if (!zc)
> +				uarg->zerocopy = 0;
>   		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
>   			uarg = msg_zerocopy_realloc(sk, size, skb_zcopy(skb));
>   			if (!uarg) {
> diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
> index f152e51242cb..d85036e91cf7 100644
> --- a/net/ipv6/ip6_output.c
> +++ b/net/ipv6/ip6_output.c
> @@ -1556,7 +1556,8 @@ static int __ip6_append_data(struct sock *sk,
>   				paged = true;
>   				zc = true;
>   				uarg = msg->msg_ubuf;
> -			}
> +			} else
> +				msg->msg_ubuf->zerocopy = 0;
>   		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
>   			uarg = msg_zerocopy_realloc(sk, length, skb_zcopy(skb));
>   			if (!uarg)
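
For what it's worth, here is a minimal sketch of the kind of strategy
adjustment the commit message mentions; the structure, helper and
threshold below are made up for illustration and assume the counters
from the notification CQEs are tracked per socket:

	#include <liburing.h>	/* for IORING_OP_SEND{,_ZC} */

	/* Hypothetical policy, not part of the patch: fall back to plain
	 * IORING_OP_SEND once most IORING_OP_SEND_ZC requests report that
	 * at least one copy happened (high bit of cqe->res set).
	 */
	struct zc_stats {
		unsigned long requests;	/* SEND_ZC requests issued */
		unsigned long copied;	/* notif CQEs with cqe->res < 0 */
	};

	static inline int pick_send_opcode(const struct zc_stats *st)
	{
		/* arbitrary threshold for illustration: give up on zero
		 * copy once more than half of the requests were copied */
		if (st->requests && st->copied * 2 > st->requests)
			return IORING_OP_SEND;
		return IORING_OP_SEND_ZC;
	}
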

-- 
Pavel Begunkov
