Message-ID: <5ba84af2-51d1-de5d-14cc-752c08e5371f@intel.com>
Date: Fri, 28 Jul 2023 15:58:58 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Yunsheng Lin <linyunsheng@...wei.com>
CC: "David S. Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
<pabeni@...hat.com>, Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Larysa Zaremba <larysa.zaremba@...el.com>, Alexander Duyck
<alexanderduyck@...com>, Jesper Dangaard Brouer <hawk@...nel.org>, "Ilias
Apalodimas" <ilias.apalodimas@...aro.org>, Simon Horman
<simon.horman@...igine.com>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 2/9] net: skbuff: don't include
<net/page_pool/types.h> to <linux/skbuff.h>
From: Yunsheng Lin <linyunsheng@...wei.com>
Date: Fri, 28 Jul 2023 20:02:51 +0800
> On 2023/7/27 22:43, Alexander Lobakin wrote:
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>
> ...
>
>> +bool page_pool_return_skb_page(struct page *page, bool napi_safe)
>
> Still having the 'page_pool_' prefix seems odd here when it is in the
> skbuff.c where most have skb_ or napi_ prefix, is it better to rename
> it to something like napi_return_page_pool_page()?
Given how the function that follows it is named, maybe
skb_pp_return_page() (or skb_pp_put_page())?
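
Something like this (just a sketch of the prototype I have in mind,
skb_pp_return_page() is only my naming suggestion, the body would stay
exactly as in the patch):

	/* Hypothetical rename to match the skb_pp_ prefix used by
	 * skb_pp_recycle() below; purely cosmetic, no functional change.
	 */
	bool skb_pp_return_page(struct page *page, bool napi_safe);

and skb_pp_recycle() would then call skb_pp_return_page() instead of
page_pool_return_skb_page().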
>
>> +{
>> + struct napi_struct *napi;
>> + struct page_pool *pp;
>> + bool allow_direct;
>> +
>> + page = compound_head(page);
>> +
>> + /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
>> + * in order to preserve any existing bits, such as bit 0 for the
>> + * head page of compound page and bit 1 for pfmemalloc page, so
>> + * mask those bits for freeing side when doing below checking,
>> + * and page_is_pfmemalloc() is checked in __page_pool_put_page()
>> + * to avoid recycling the pfmemalloc page.
>> + */
>> + if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
>> + return false;
>> +
>> + pp = page->pp;
>> +
>> + /* Allow direct recycle if we have reasons to believe that we are
>> + * in the same context as the consumer would run, so there's
>> + * no possible race.
>> + */
>> + napi = READ_ONCE(pp->p.napi);
>> + allow_direct = napi_safe && napi &&
>> + READ_ONCE(napi->list_owner) == smp_processor_id();
>> +
>> + /* Driver set this to memory recycling info. Reset it on recycle.
>> + * This will *not* work for NIC using a split-page memory model.
>> + * The page will be returned to the pool here regardless of the
>> + * 'flipped' fragment being in use or not.
>> + */
>> + page_pool_put_full_page(pp, page, allow_direct);
>> +
>> + return true;
>> +}
>> +EXPORT_SYMBOL(page_pool_return_skb_page);
>> +
>> static bool skb_pp_recycle(struct sk_buff *skb, void *data, bool napi_safe)
(this one)
>> {
>> if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
>>
Thanks,
Olek