Message-ID: <85c64263-238e-7036-2574-efd0f5d4848b@huawei.com>
Date: Wed, 9 Aug 2023 20:35:46 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Liang Chen <liangchen.linux@...il.com>
CC: <davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
<pabeni@...hat.com>, <hawk@...nel.org>, <ilias.apalodimas@...aro.org>,
<daniel@...earbox.net>, <ast@...nel.org>, <netdev@...r.kernel.org>
Subject: Re: [RFC PATCH net-next v2 2/2] net: veth: Improving page pool pages
recycling

On 2023/8/9 18:01, Liang Chen wrote:
> On Tue, Aug 8, 2023 at 7:16 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>
>> On 2023/8/7 20:20, Liang Chen wrote:
>>> On Wed, Aug 2, 2023 at 8:32 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>>>
>>>> On 2023/8/1 14:19, Liang Chen wrote:
>>>>
>>>>> @@ -862,9 +865,18 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
>>>>> case XDP_PASS:
>>>>> break;
>>>>> case XDP_TX:
>>>>> - veth_xdp_get(xdp);
>>>>> - consume_skb(skb);
>>>>> - xdp->rxq->mem = rq->xdp_mem;
>>>>> + if (skb != skb_orig) {
>>>>> + xdp->rxq->mem = rq->xdp_mem_pp;
>>>>> + kfree_skb_partial(skb, true);
>>>>
>>>> For this case, I suppose that we can safely call kfree_skb_partial()
>>>> as we allocate the skb ourselves in veth_convert_skb_to_xdp_buff(),
>>>> but I am not sure about the !skb->pp_recycle case.
>>>>
>>>>> + } else if (!skb->pp_recycle) {
>>>>> + xdp->rxq->mem = rq->xdp_mem;
>>>>> + kfree_skb_partial(skb, true);
>>>>
>>>> For consume_skb(), there is the skb_unref() check and other
>>>> checks/operations. Can we really assume that we can call
>>>> kfree_skb_partial() with head_stolen being true? Is it possible
>>>> that skb->users is bigger than 1? If so, wouldn't we free the
>>>> 'skb' back to skbuff_cache while others may still be using it?
>>>>
>>>
>>> Thanks for raising the concern. If there are multiple references to
>>> the skb (skb->users is greater than 1), the skb will be reallocated
>>> in veth_convert_skb_to_xdp_buff(), so it will enter the
>>> skb != skb_orig case.
>>>
>>> In fact, entering the !skb->pp_recycle case implies that the skb
>>> meets all of the following conditions (see the sketch below):
>>> 1. It is neither shared nor cloned.
>>> 2. Its head data is not allocated using kmalloc.
>>> 3. It does not have fragment data.
>>> 4. It has at least XDP_PACKET_HEADROOM bytes of headroom.
>>>
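>>> Roughly, that corresponds to the reallocation check in
>>> veth_convert_skb_to_xdp_buff() (a condensed sketch, not the
>>> verbatim code):
>>>
>>>	/* Any of these forces building a fresh skb backed by page
>>>	 * pool pages, i.e. the skb != skb_orig case.
>>>	 */
>>>	if (skb_shared(skb) || skb_head_is_locked(skb) ||
>>>	    skb_shinfo(skb)->nr_frags ||
>>>	    skb_headroom(skb) < XDP_PACKET_HEADROOM)
>>>		/* reallocate the skb */;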
>>
>> You are right, I missed the checking in
>> veth_convert_skb_to_xdp_buff(). It seems XDP is pretty strict about
>> buffer ownership; it needs exclusive access to the whole buffer.
>>
>> And it seems there is only one difference left then:
>> kfree_skb_partial() calls 'kmem_cache_free(skbuff_cache, skb)' while
>> consume_skb() calls 'kfree_skbmem(skb)'. If we are sure that 'skb' is
>> only ever allocated from 'skbuff_cache', this patch looks good to me.
>>
>
> The difference between kmem_cache_free() and kfree_skbmem() lies in
> the fact that kfree_skbmem() checks whether the skb is an fclone
> (fast clone) skb; if it is, it must be returned to
> skbuff_fclone_cache. Currently, fclone skbs can only be allocated
> through __alloc_skb(), and their head buffer is allocated by
> kmalloc_reserve(), which violates condition 2 above ("Its head data
> is not allocated using kmalloc"). Therefore, an fclone skb will still
> be reallocated by veth_convert_skb_to_xdp_buff(), leading to the
> skb != skb_orig case. In other words, entering the !skb->pp_recycle
> case indicates that the skb was allocated from skbuff_cache (see the
> kfree_skbmem() sketch below).
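>
> For reference, kfree_skbmem() looks roughly like this (a condensed
> sketch of net/core/skbuff.c, fast paths omitted):
>
>	static void kfree_skbmem(struct sk_buff *skb)
>	{
>		struct sk_buff_fclones *fclones;
>
>		switch (skb->fclone) {
>		case SKB_FCLONE_UNAVAILABLE:
>			/* Plain skb: same free as kfree_skb_partial() */
>			kmem_cache_free(skbuff_cache, skb);
>			return;
>		case SKB_FCLONE_ORIG:
>			fclones = container_of(skb, struct sk_buff_fclones,
>					       skb1);
>			break;
>		default: /* SKB_FCLONE_CLONE */
>			fclones = container_of(skb, struct sk_buff_fclones,
>					       skb2);
>			break;
>		}
>
>		if (refcount_dec_and_test(&fclones->fclone_ref))
>			kmem_cache_free(skbuff_fclone_cache, fclones);
>	}
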
It might need a comment to make that clear, or some checking such as
BUILD_BUG_ON() to ensure it, as it is not so obvious if someone
changes the code to allocate an fclone skb with fragmented head data
in the future.
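
Something like the following could capture that assumption (a
hypothetical sketch; since skb->fclone is only known at runtime, a
debug check such as DEBUG_NET_WARN_ON_ONCE() may fit better than
BUILD_BUG_ON()):

	/* The !skb->pp_recycle path assumes the skb came straight
	 * from skbuff_cache and thus cannot be an fclone skb.
	 */
	DEBUG_NET_WARN_ON_ONCE(skb->fclone != SKB_FCLONE_UNAVAILABLE);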
Also, I suppose veth_xdp_rcv_skb() is called in NAPI context, so we
might be able to reuse the 'skb' by using something like
napi_skb_free_stolen_head().
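
For example (a hypothetical sketch against the hunk above, where
napi_skb_free_stolen_head() replaces the kfree_skb_partial() calls):

	case XDP_TX:
		if (skb != skb_orig) {
			xdp->rxq->mem = rq->xdp_mem_pp;
			/* Return the stolen-head skb to the per-CPU
			 * NAPI cache for reuse instead of freeing it
			 * back to skbuff_cache.
			 */
			napi_skb_free_stolen_head(skb);
		} else if (!skb->pp_recycle) {
			xdp->rxq->mem = rq->xdp_mem;
			napi_skb_free_stolen_head(skb);
		}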