Message-ID: <CAJ8uoz0uaztjQ7dBrrnzJw5ghXV4uZ8GWjMaTd9GOR_FCKjo0g@mail.gmail.com>
Date: Thu, 14 Jul 2022 14:39:31 +0200
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Network Development <netdev@...r.kernel.org>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>,
Björn Töpel <bjorn@...nel.org>,
Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH v2 bpf-next] xsk: mark napi_id on sendmsg()
On Thu, Jul 7, 2022 at 3:20 PM Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> When an application runs in busy poll mode and only sends packets,
> never receiving any, it is currently impossible to get into
> napi_busy_loop(), as napi_id is only marked on the Rx side in
> xsk_rcv_check(). There, napi_id is taken from the xdp_rxq_info
> carried by the xdp_buff. From the Tx perspective, we do not have
> access to it. What we do have handy is the xsk pool.
>
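For context on what "busy poll mode" means above: the application opts
the AF_XDP socket into busy polling through the standard busy-poll
socket options. A minimal user-space sketch follows; the timeout and
budget values are illustrative and not part of this patch, and the
fallback #defines are only there in case older libc headers lack the
newer option names (values assumed to match the kernel uapi).

#include <sys/socket.h>

#ifndef SO_PREFER_BUSY_POLL
#define SO_PREFER_BUSY_POLL 69
#endif
#ifndef SO_BUSY_POLL_BUDGET
#define SO_BUSY_POLL_BUDGET 70
#endif

/* Put an AF_XDP socket fd into busy-poll mode. */
static int enable_busy_poll(int xsk_fd)
{
        int opt;

        opt = 1;        /* prefer busy polling over interrupt-driven napi */
        if (setsockopt(xsk_fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
                       &opt, sizeof(opt)))
                return -1;

        opt = 20;       /* busy-poll timeout in microseconds */
        if (setsockopt(xsk_fd, SOL_SOCKET, SO_BUSY_POLL,
                       &opt, sizeof(opt)))
                return -1;

        opt = 64;       /* packets to process per busy-poll iteration */
        if (setsockopt(xsk_fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
                       &opt, sizeof(opt)))
                return -1;

        return 0;
}
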
> The xsk pool works on a pool of internal xdp_buff wrappers called
> xdp_buff_xsk. AF_XDP ZC-enabled drivers call xp_set_rxq_info(), so
> each xdp_buff_xsk has a valid pointer to the xdp_rxq_info of the
> underlying queue. Therefore, on the Tx side, napi_id can be pulled
> from xs->pool->heads[0].xdp.rxq->napi_id. Hide this pointer chase
> behind a helper function, xsk_pool_get_napi_id().
>
> Do this only for sockets working in ZC mode as otherwise rxq pointers
> would not be initialized.
Thanks Maciej.
Acked-by: Magnus Karlsson <magnus.karlsson@...el.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> ---
>
> v2:
> * target bpf-next instead of bpf and don't treat it as fix (Bjorn)
> * hide pointer chasing under helper function (Bjorn)
>
> include/net/xdp_sock_drv.h | 14 ++++++++++++++
> net/xdp/xsk.c | 5 ++++-
> 2 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
> index 4aa031849668..4277b0dcee05 100644
> --- a/include/net/xdp_sock_drv.h
> +++ b/include/net/xdp_sock_drv.h
> @@ -44,6 +44,15 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
> xp_set_rxq_info(pool, rxq);
> }
>
> +static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
> +{
> +#ifdef CONFIG_NET_RX_BUSY_POLL
> + return pool->heads[0].xdp.rxq->napi_id;
> +#else
> + return 0;
> +#endif
> +}
> +
> static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
> unsigned long attrs)
> {
> @@ -198,6 +207,11 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
> {
> }
>
> +static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
> +{
> + return 0;
> +}
> +
> static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
> unsigned long attrs)
> {
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 19ac872a6624..86a97da7e50b 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -637,8 +637,11 @@ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len
> if (unlikely(need_wait))
> return -EOPNOTSUPP;
>
> - if (sk_can_busy_loop(sk))
> + if (sk_can_busy_loop(sk)) {
> + if (xs->zc)
> + __sk_mark_napi_id_once(sk, xsk_pool_get_napi_id(xs->pool));
> sk_busy_loop(sk, 1); /* only support non-blocking sockets */
> + }
>
> if (xs->zc && xsk_no_wakeup(sk))
> return 0;
> --
> 2.27.0
>