Message-ID: <61ad94bde1ea6_50c22081e@john.notmuch>
Date:   Sun, 05 Dec 2021 20:42:37 -0800
From:   John Fastabend <john.fastabend@...il.com>
To:     Lorenzo Bianconi <lorenzo@...nel.org>, bpf@...r.kernel.org,
        netdev@...r.kernel.org
Cc:     lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
        ast@...nel.org, daniel@...earbox.net, shayagr@...zon.com,
        john.fastabend@...il.com, dsahern@...nel.org, brouer@...hat.com,
        echaudro@...hat.com, jasowang@...hat.com,
        alexander.duyck@...il.com, saeed@...nel.org,
        maciej.fijalkowski@...el.com, magnus.karlsson@...el.com,
        tirthendu.sarkar@...el.com, toke@...hat.com
Subject: RE: [PATCH v19 bpf-next 12/23] bpf: add multi-buff support to the
 bpf_xdp_adjust_tail() API

Lorenzo Bianconi wrote:
> From: Eelco Chaudron <echaudro@...hat.com>
> 
> This change adds support for tail growing and shrinking for XDP multi-buff.
> 
> When called on a multi-buffer packet with a grow request, it will work
> on the last fragment of the packet. So the maximum grow size is the
> last fragment's tailroom, i.e. no new buffer will be allocated.
> An XDP mb-capable driver is expected to set frag_size in the xdp_rxq_info
> data structure to notify the XDP core of the fragment size. A frag_size of
> 0 is interpreted by the XDP core as meaning tail growing is not allowed.
> Introduce the __xdp_rxq_info_reg utility routine to initialize the frag_size field.
> 
> When shrinking, it will work from the last fragment all the way down to
> the base buffer, depending on the shrink size. It's important to mention
> that once you shrink, the fragment(s) are freed, so you cannot grow back
> to the original size.
> 
> Acked-by: Jakub Kicinski <kuba@...nel.org>
> Co-developed-by: Lorenzo Bianconi <lorenzo@...nel.org>
> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> Signed-off-by: Eelco Chaudron <echaudro@...hat.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c |  3 +-
>  include/net/xdp.h                     | 16 ++++++-
>  net/core/filter.c                     | 67 +++++++++++++++++++++++++++
>  net/core/xdp.c                        | 12 +++--
>  4 files changed, 90 insertions(+), 8 deletions(-)

Some nits and one question about offset > 0 on shrink.
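
Also, just to check my reading of the grow/shrink semantics in the
changelog, below is roughly how I picture a program using this. It is
a made-up example on my side, not something from the series:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int trim_then_grow(struct xdp_md *ctx)
{
	/* Shrink works across fragments, freeing them as it goes. */
	if (bpf_xdp_adjust_tail(ctx, -64))
		return XDP_DROP;

	/* Grow only has the last fragment's tailroom to work with and
	 * is refused when the driver left frag_size at 0, so per the
	 * changelog this may legitimately fail; ignore the result.
	 */
	bpf_xdp_adjust_tail(ctx, 64);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";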

>  void xdp_rxq_info_unreg(struct xdp_rxq_info *xdp_rxq);
>  void xdp_rxq_info_unused(struct xdp_rxq_info *xdp_rxq);
>  bool xdp_rxq_info_is_reg(struct xdp_rxq_info *xdp_rxq);
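
And on the driver side, this is how I understand a driver would opt in
to tail growing. The foo_* names are made up, and the exact
__xdp_rxq_info_reg() signature is my best reading of the xdp.h change
(the hunk got trimmed above):

#include <linux/netdevice.h>
#include <net/xdp.h>

struct foo_rx_queue {
	struct xdp_rxq_info xdp_rxq;
	/* ... driver private bits ... */
};

/* A non-zero frag_size is what lets the core bound grows into the last
 * fragment's tailroom; the plain xdp_rxq_info_reg() wrapper keeps
 * passing 0, i.e. tail growing stays disabled.
 */
static int foo_rxq_register_xdp(struct foo_rx_queue *rxq,
				struct net_device *netdev,
				u32 queue_index, unsigned int napi_id)
{
	return __xdp_rxq_info_reg(&rxq->xdp_rxq, netdev, queue_index,
				  napi_id, PAGE_SIZE /* frag_size */);
}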
> diff --git a/net/core/filter.c b/net/core/filter.c
> index b9bfe6fac6df..ace67957e685 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -3831,11 +3831,78 @@ static const struct bpf_func_proto bpf_xdp_adjust_head_proto = {
>  	.arg2_type	= ARG_ANYTHING,
>  };
>  
> +static int bpf_xdp_mb_increase_tail(struct xdp_buff *xdp, int offset)
> +{
> +	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> +	skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags - 1];
> +	struct xdp_rxq_info *rxq = xdp->rxq;
> +	int size, tailroom;

These could be 'unsigned int'.

> +
> +	if (!rxq->frag_size || rxq->frag_size > xdp->frame_sz)
> +		return -EOPNOTSUPP;
> +
> +	tailroom = rxq->frag_size - skb_frag_size(frag) - skb_frag_off(frag);
> +	if (unlikely(offset > tailroom))
> +		return -EINVAL;
> +
> +	size = skb_frag_size(frag);
> +	memset(skb_frag_address(frag) + size, 0, offset);
> +	skb_frag_size_set(frag, size + offset);

Could probably make this a helper, skb_frag_grow() or something, in
skbuff.h; we have sub, add, put_zero, etc. there.
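Something along these lines next to the existing helpers is what I had
in mind; the name and exact shape are just a sketch:

/* Grow the frag by @delta bytes and zero the new tail. Caller must
 * have checked that @delta bytes of tailroom are available.
 */
static inline void skb_frag_grow(skb_frag_t *frag, unsigned int delta)
{
	unsigned int size = skb_frag_size(frag);

	memset(skb_frag_address(frag) + size, 0, delta);
	skb_frag_size_add(frag, delta);
}

Then the above collapses to skb_frag_grow(frag, offset) plus the
xdp_frags_size accounting.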

> +	sinfo->xdp_frags_size += offset;
> +
> +	return 0;
> +}
> +
> +static int bpf_xdp_mb_shrink_tail(struct xdp_buff *xdp, int offset)
> +{
> +	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> +	int i, n_frags_free = 0, len_free = 0;
> +
> +	if (unlikely(offset > (int)xdp_get_buff_len(xdp) - ETH_HLEN))
> +		return -EINVAL;
> +
> +	for (i = sinfo->nr_frags - 1; i >= 0 && offset > 0; i--) {
> +		skb_frag_t *frag = &sinfo->frags[i];
> +		int size = skb_frag_size(frag);
> +		int shrink = min_t(int, offset, size);
> +
> +		len_free += shrink;
> +		offset -= shrink;
> +
> +		if (unlikely(size == shrink)) {

Not so sure about the unlikely here.

> +			struct page *page = skb_frag_page(frag);
> +
> +			__xdp_return(page_address(page), &xdp->rxq->mem,
> +				     false, NULL);
> +			n_frags_free++;
> +		} else {
> +			skb_frag_size_set(frag, size - shrink);

skb_frag_size_sub() maybe, but you need to pull out size anyways, so
it's not a big deal to me.

> +			break;
> +		}
> +	}
> +	sinfo->nr_frags -= n_frags_free;
> +	sinfo->xdp_frags_size -= len_free;
> +
> +	if (unlikely(offset > 0)) {

Hmm, what's the case where offset != 0 here? It seems that with the
initial unlikely check, and shrinking while walking backwards through
the frags, it should be zero? Maybe a comment would help?
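If the intent is that the shrink can consume every fragment and spill
into the linear area (which the ETH_HLEN check above seems to allow),
something like this would do it for me; the wording is just my guess at
the intent:

	if (unlikely(offset > 0)) {
		/* The shrink consumed all the fragments, take the rest
		 * out of the linear area of the base buffer.
		 */
		xdp_buff_clear_mb(xdp);
		xdp->data_end -= offset;
	}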

> +		xdp_buff_clear_mb(xdp);
> +		xdp->data_end -= offset;
> +	}
> +
> +	return 0;
> +}
> +
>  BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
>  {
>  	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
>  	void *data_end = xdp->data_end + offset;
>  
> +	if (unlikely(xdp_buff_is_mb(xdp))) { /* xdp multi-buffer */
> +		if (offset < 0)
> +			return bpf_xdp_mb_shrink_tail(xdp, -offset);
> +
> +		return bpf_xdp_mb_increase_tail(xdp, offset);
> +	}
> +
>  	/* Notice that xdp_data_hard_end have reserved some tailroom */
>  	if (unlikely(data_end > data_hard_end))
>  		return -EINVAL;

[...]

Thanks,
John
