Message-ID: <20201221093651.44ff4195@carbon>
Date: Mon, 21 Dec 2020 09:36:51 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org, davem@...emloft.net,
kuba@...nel.org, ast@...nel.org, daniel@...earbox.net,
lorenzo.bianconi@...hat.com, alexander.duyck@...il.com,
maciej.fijalkowski@...el.com, saeed@...nel.org, brouer@...hat.com
Subject: Re: [PATCH v4 bpf-next 1/2] net: xdp: introduce xdp_init_buff
utility routine
On Sat, 19 Dec 2020 18:55:00 +0100
Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index 11ec93f827c0..323340caef88 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -76,6 +76,13 @@ struct xdp_buff {
> u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
> };
>
> +static __always_inline void
> +xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> +{
> + xdp->frame_sz = frame_sz;
> + xdp->rxq = rxq;
Later you will add 'xdp->mb = 0' here.
> +}
Judging by the names of your functions, I assume that xdp_init_buff() is
meant to be called before xdp_prepare_buff(), right?
(And your pending 'xdp->mb = 0' also prefers this order.)

Below, in bpf_prog_test_run_xdp() and netif_receive_generic_xdp(), you
violate this order... which will give you headaches when implementing
the multi-buffer support. It is also a bad example for driver developers,
who need to figure out this calling order from the function names.

Below, will it be possible to have 'init' before 'prepare'?
> +
> /* Reserve memory area at end-of data area.
> *
> * This macro reserves tailroom in the XDP buffer by limiting the
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index c1c30a9f76f3..a8fa5a9e4137 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -640,10 +640,10 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
> xdp.data = data + headroom;
> xdp.data_meta = xdp.data;
> xdp.data_end = xdp.data + size;
> - xdp.frame_sz = headroom + max_data_sz + tailroom;
>
> rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
> - xdp.rxq = &rxqueue->xdp_rxq;
> + xdp_init_buff(&xdp, headroom + max_data_sz + tailroom,
> + &rxqueue->xdp_rxq);
> bpf_prog_change_xdp(NULL, prog);
> ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
> if (ret)
> diff --git a/net/core/dev.c b/net/core/dev.c
> index a46334906c94..b1a765900c01 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4588,11 +4588,11 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> struct netdev_rx_queue *rxqueue;
> void *orig_data, *orig_data_end;
> u32 metalen, act = XDP_DROP;
> + u32 mac_len, frame_sz;
> __be16 orig_eth_type;
> struct ethhdr *eth;
> bool orig_bcast;
> int hlen, off;
> - u32 mac_len;
>
> /* Reinjected packets coming from act_mirred or similar should
> * not get XDP generic processing.
> @@ -4631,8 +4631,8 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> xdp->data_hard_start = skb->data - skb_headroom(skb);
>
> /* SKB "head" area always have tailroom for skb_shared_info */
> - xdp->frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
> - xdp->frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> + frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
> + frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>
> orig_data_end = xdp->data_end;
> orig_data = xdp->data;
> @@ -4641,7 +4641,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> orig_eth_type = eth->h_proto;
>
> rxqueue = netif_get_rxqueue(skb);
> - xdp->rxq = &rxqueue->xdp_rxq;
> + xdp_init_buff(xdp, frame_sz, &rxqueue->xdp_rxq);
>
> act = bpf_prog_run_xdp(xdp_prog, xdp);
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer