Message-ID: <20180723182247.494ccb07@cakuba.netronome.com>
Date: Mon, 23 Jul 2018 18:22:47 -0700
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Toshiaki Makita <toshiaki.makita1@...il.com>
Cc: netdev@...r.kernel.org, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Toshiaki Makita <makita.toshiaki@....ntt.co.jp>,
Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [PATCH v3 bpf-next 6/8] xdp: Add a flag for disabling
napi_direct of xdp_return_frame in xdp_mem_info
On Mon, 23 Jul 2018 00:13:06 +0900, Toshiaki Makita wrote:
> From: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
>
> We need a mechanism to disable napi_direct when calling
> xdp_return_frame_rx_napi() from certain contexts.
> When veth gains XDP_REDIRECT support, it will redirect packets that
> were themselves redirected from other devices. On redirection veth
> will reuse the xdp_mem_info of the redirection source device so that
> return_frame keeps working. But in this case the .ndo_xdp_xmit()
> called from veth redirection uses an xdp_mem_info which is not
> guarded by NAPI, because .ndo_xdp_xmit is not called directly from
> the rxq which owns the xdp_mem_info.
>
> This patch introduces a flag in xdp_mem_info to indicate that
> napi_direct should be disabled even when the _rx_napi variant is
> used.
>
> Signed-off-by: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
To be clear - you will modify the flags of the original source device
if it ever redirected a frame to a software device like veth? Seems a
bit heavy-handed. xdp_return_frame_rx_napi() is only really used on
error paths, but still... Also, as you note, the original NAPI can run
concurrently with your veth dest one, but also with the NAPIs of other
veth devices, so the non-atomic
xdp.rxq->mem.flags |= XDP_MEM_RF_NO_DIRECT;
makes me worried.
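
To illustrate the hazard, here is a userspace-only sketch (not kernel
code): two threads play two NAPI instances OR-ing into one shared
flags word. XDP_MEM_RF_NO_DIRECT is the name from the patch; RF_OTHER
is a made-up second flag purely so a lost update becomes observable.
The point is just that a plain |= on shared memory is a non-atomic
read-modify-write.

/* Build: gcc -O2 -pthread flags-race.c -o flags-race */
#include <pthread.h>
#include <stdio.h>

#define XDP_MEM_RF_NO_DIRECT	(1U << 0)	/* name from the patch */
#define RF_OTHER		(1U << 1)	/* hypothetical flag */
#define ITERS			1000000

static unsigned int flags;			/* stands in for mem->flags */
static pthread_barrier_t start, done;

/* Each thread plays one NAPI instance OR-ing its flag in. */
static void *worker(void *arg)
{
	unsigned int bit = *(unsigned int *)arg;
	int i;

	for (i = 0; i < ITERS; i++) {
		pthread_barrier_wait(&start);
		unsigned int tmp = flags;	/* load   */
		tmp |= bit;			/* modify */
		flags = tmp;			/* store - can overwrite the
						 * other thread's update */
		pthread_barrier_wait(&done);
	}
	return NULL;
}

int main(void)
{
	unsigned int a = XDP_MEM_RF_NO_DIRECT, b = RF_OTHER;
	pthread_t ta, tb;
	int i, lost = 0;

	pthread_barrier_init(&start, NULL, 3);
	pthread_barrier_init(&done, NULL, 3);
	pthread_create(&ta, NULL, worker, &a);
	pthread_create(&tb, NULL, worker, &b);

	for (i = 0; i < ITERS; i++) {
		flags = 0;
		pthread_barrier_wait(&start);	/* both threads OR now */
		pthread_barrier_wait(&done);
		if (flags != (XDP_MEM_RF_NO_DIRECT | RF_OTHER))
			lost++;			/* one OR was lost */
	}
	pthread_join(ta, NULL);
	pthread_join(tb, NULL);
	printf("lost updates: %d out of %d\n", lost, ITERS);
	return 0;
}

Run that on an SMP machine and the final count can come out nonzero.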
Would you mind elaborating on why you don't handle the RX completely
in the NAPI context of the original device?
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index fcb033f51d8c..1d1bc6553ff2 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -41,6 +41,9 @@ enum xdp_mem_type {
> MEM_TYPE_MAX,
> };
>
> +/* XDP flags for xdp_mem_info */
> +#define XDP_MEM_RF_NO_DIRECT BIT(0) /* don't use napi_direct */
> +
> /* XDP flags for ndo_xdp_xmit */
> #define XDP_XMIT_FLUSH (1U << 0) /* doorbell signal consumer */
> #define XDP_XMIT_FLAGS_MASK XDP_XMIT_FLUSH
> @@ -48,6 +51,7 @@ enum xdp_mem_type {
> struct xdp_mem_info {
> u32 type; /* enum xdp_mem_type, but known size type */
> u32 id;
> + u32 flags;
> };
>
> struct page_pool;
> diff --git a/net/core/xdp.c b/net/core/xdp.c
> index 57285383ed00..1426c608fd75 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -330,10 +330,12 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
> /* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
> xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> page = virt_to_head_page(data);
> - if (xa)
> + if (xa) {
> + napi_direct &= !(mem->flags & XDP_MEM_RF_NO_DIRECT);
> page_pool_put_page(xa->page_pool, page, napi_direct);
> - else
> + } else {
> put_page(page);
> + }
> rcu_read_unlock();
> break;
> case MEM_TYPE_PAGE_SHARED:
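
If the shared flag stays, making the update atomic would at least
avoid the lost-update side of the problem (it does nothing about the
sharing across devices itself). As a sketch, swapping the plain |= in
worker() above for an atomic OR makes the reported count drop to
zero. __atomic_fetch_or() is the GCC/clang builtin; the in-kernel
analogue would be set_bit(), which operates on unsigned long and
would therefore mean widening the u32 flags field this patch adds.

/* Drop-in replacement for worker() in the sketch above. */
static void *worker(void *arg)
{
	unsigned int bit = *(unsigned int *)arg;
	int i;

	for (i = 0; i < ITERS; i++) {
		pthread_barrier_wait(&start);
		/* atomic read-modify-write, no update can be lost */
		__atomic_fetch_or(&flags, bit, __ATOMIC_RELAXED);
		pthread_barrier_wait(&done);
	}
	return NULL;
}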