Message-ID: <20211105162941.46b807e5@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Fri, 5 Nov 2021 16:29:41 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
lorenzo.bianconi@...hat.com, davem@...emloft.net, ast@...nel.org,
daniel@...earbox.net, shayagr@...zon.com, john.fastabend@...il.com,
dsahern@...nel.org, brouer@...hat.com, echaudro@...hat.com,
jasowang@...hat.com, alexander.duyck@...il.com, saeed@...nel.org,
maciej.fijalkowski@...el.com, magnus.karlsson@...el.com,
tirthendu.sarkar@...el.com, toke@...hat.com
Subject: Re: [PATCH v17 bpf-next 12/23] bpf: add multi-buff support to the
bpf_xdp_adjust_tail() API
On Thu, 4 Nov 2021 18:35:32 +0100 Lorenzo Bianconi wrote:
> This change adds support for tail growing and shrinking for XDP multi-buff.
>
> When called on a multi-buffer packet with a grow request, it will always
> work on the last fragment of the packet. So the maximum grow size is the
> last fragment's tailroom, i.e. no new buffer will be allocated.
>
> When shrinking, it will work from the last fragment, all the way down to
> the base buffer, depending on the shrink size. It's important to mention
> that once you shrink, the freed fragment(s) are released, so you cannot
> grow back to the original size.
> +static int bpf_xdp_mb_increase_tail(struct xdp_buff *xdp, int offset)
> +{
> + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> + skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags - 1];
> + int size, tailroom;
> +
> + tailroom = xdp->frame_sz - skb_frag_size(frag) - skb_frag_off(frag);
I know I complained about this before, but the assumption that we can
use all the space up to xdp->frame_sz makes me uneasy.
Drivers may not expect that the core can decide to extend the
last frag; I don't think the skb path would ever do this.
How do you feel about any of these options:
- dropping this part for now (return an error for increase)
- making this an rxq flag or reading the "reserved frag size"
from rxq (so that drivers explicitly opt-in)
- adding a test that can be run on real NICs
?
> +static int bpf_xdp_mb_shrink_tail(struct xdp_buff *xdp, int offset)
> +{
> + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> + int i, n_frags_free = 0, len_free = 0, tlen_free = 0;
> +
> + if (unlikely(offset > ((int)xdp_get_buff_len(xdp) - ETH_HLEN)))
nit: outer parens unnecessary
> + return -EINVAL;
> @@ -371,6 +371,7 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
> break;
> }
> }
> +EXPORT_SYMBOL_GPL(__xdp_return);
Why the export?