lists.openwall.net - Open Source and information security mailing list archives
 
Message-ID: <CAKhg4tKE3gZAL81qfdkyqWCTdPAzvFu1pZkRkPrk6B5bTn1VrQ@mail.gmail.com>
Date: Sat, 12 Aug 2023 09:52:39 +0800
From: Liang Chen <liangchen.linux@...il.com>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, 
	pabeni@...hat.com, hawk@...nel.org, ilias.apalodimas@...aro.org, 
	daniel@...earbox.net, ast@...nel.org, netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next v2 2/2] net: veth: Improving page pool pages recycling

On Wed, Aug 9, 2023 at 8:35 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>
> On 2023/8/9 18:01, Liang Chen wrote:
> > On Tue, Aug 8, 2023 at 7:16 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
> >>
> >> On 2023/8/7 20:20, Liang Chen wrote:
> >>> On Wed, Aug 2, 2023 at 8:32 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
> >>>>
> >>>> On 2023/8/1 14:19, Liang Chen wrote:
> >>>>
> >>>>> @@ -862,9 +865,18 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
> >>>>>       case XDP_PASS:
> >>>>>               break;
> >>>>>       case XDP_TX:
> >>>>> -             veth_xdp_get(xdp);
> >>>>> -             consume_skb(skb);
> >>>>> -             xdp->rxq->mem = rq->xdp_mem;
> >>>>> +             if (skb != skb_orig) {
> >>>>> +                     xdp->rxq->mem = rq->xdp_mem_pp;
> >>>>> +                     kfree_skb_partial(skb, true);
> >>>>
> >>>> For this case, I suppose we can safely call kfree_skb_partial(),
> >>>> as we allocate the skb in veth_convert_skb_to_xdp_buff(), but
> >>>> I am not sure about the !skb->pp_recycle case.
> >>>>
> >>>>> +             } else if (!skb->pp_recycle) {
> >>>>> +                     xdp->rxq->mem = rq->xdp_mem;
> >>>>> +                     kfree_skb_partial(skb, true);
> >>>>
> >>>> consume_skb() does skb_unref() checking and other checks/operations.
> >>>> Can we really assume we can call kfree_skb_partial() with head_stolen
> >>>> being true? Is it possible that skb->users is bigger than 1? If so,
> >>>> wouldn't we free the 'skb' back to skbuff_cache while others may
> >>>> still be using it?
> >>>>
> >>>
> >>> Thanks for raising the concern. If there are multiple references to
> >>> the skb (skb->users is greater than 1), the skb will be reallocated in
> >>> veth_convert_skb_to_xdp_buff(). So it should enter the skb != skb_orig
> >>> case.
> >>>
> >>> In fact, entering the !skb->pp_recycle case implies that the skb meets
> >>> the following conditions:
> >>> 1. It is neither shared nor cloned.
> >>> 2. Its head buffer is not allocated using kmalloc.
> >>> 3. It does not have fragment data.
> >>> 4. Its headroom is at least XDP_PACKET_HEADROOM.
> >>>
> >>
> >> You are right, I missed the checking in veth_convert_skb_to_xdp_buff().
> >> It seems XDP is pretty strict about buffer ownership; it needs
> >> exclusive access to the whole buffer.
> >>
> >> And it seems there is only one difference left then:
> >> kfree_skb_partial() calls 'kmem_cache_free(skbuff_cache, skb)' while
> >> consume_skb() calls 'kfree_skbmem(skb)'. If we are sure that 'skb' is
> >> only ever allocated from 'skbuff_cache', this patch looks good to me.
> >>
> >
> > The difference between kmem_cache_free and kfree_skbmem lies in the
> > fact that kfree_skbmem checks whether the skb is an fclone (fast
> > clone) skb. If it is, it should be returned to the
> > skbuff_fclone_cache. Currently, fclone skbs can only be allocated
> > through __alloc_skb, and their head buffer is allocated by
> > kmalloc_reserve, which does not meet the condition mentioned above -
> > "2. It is not allocated using kmalloc.". Therefore, the fclone skb
> > will still be reallocated by veth_convert_skb_to_xdp_buff, leading to
> > the skb != skb_orig case. In other words, entering the
> > !skb->pp_recycle case indicates that the skb was allocated from
> > skbuff_cache.
>
> It might need a comment to make that clear, or some compile-time check
> such as BUILD_BUG_ON() to ensure it, as it is not so obvious if someone
> changes the code to allocate an fclone skb with a frag head data in
> the future.
>

Sure. We will add a comment like the one below to explain that:
/*
 * We can safely use kfree_skb_partial() here because this cannot be an
 * fclone skb. Fclone skbs are exclusively allocated via __alloc_skb(),
 * with their head buffer allocated by kmalloc_reserve() (so
 * skb->head_frag = 0), satisfying the skb_head_is_locked() condition in
 * veth_convert_skb_to_xdp_buff() and leading to the skb being
 * reallocated.
 */

> Also I suppose the veth_xdp_rcv_skb() is called in NAPI context, and
> we might be able to reuse the 'skb' if we can use something like
> napi_skb_free_stolen_head().

Sure. Using napi_skb_free_stolen_head() seems to be a good idea; it
further accelerates the skb != skb_orig case in our preliminary tests.
However, it is not suitable for the !skb->pp_recycle case, as that skb
is not allocated in the current NAPI context.
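Putting the two free paths together, the XDP_TX branch discussed above might then look roughly as follows. This is an illustrative pseudocode-style sketch assembled only from this thread, not a tested patch:

```c
	case XDP_TX:
		if (skb != skb_orig) {
			/* skb was allocated by veth_convert_skb_to_xdp_buff()
			 * in this NAPI context, so its stolen head can go
			 * back to the NAPI skb cache. */
			xdp->rxq->mem = rq->xdp_mem_pp;
			napi_skb_free_stolen_head(skb);
		} else if (!skb->pp_recycle) {
			/* skb may have been allocated outside the current
			 * NAPI context; free it back to skbuff_cache. */
			xdp->rxq->mem = rq->xdp_mem;
			kfree_skb_partial(skb, true);
		}
```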

Thanks,
Liang
