Message-ID: <b2779d81-4cb3-5ccc-8e36-02cd633383f3@huawei.com>
Date: Thu, 23 Sep 2021 19:24:24 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
CC: <davem@...emloft.net>, <kuba@...nel.org>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linuxarm@...neuler.org>,
<hawk@...nel.org>, <jonathan.lemon@...il.com>, <alobakin@...me>,
<willemb@...gle.com>, <cong.wang@...edance.com>,
<pabeni@...hat.com>, <haokexin@...il.com>, <nogikh@...gle.com>,
<elver@...gle.com>, <memxor@...il.com>, <edumazet@...gle.com>,
<alexander.duyck@...il.com>, <dsahern@...il.com>
Subject: Re: [PATCH net-next 3/7] page_pool: avoid calling compound_head() for
	skb frag page
On 2021/9/23 16:33, Ilias Apalodimas wrote:
> On Wed, Sep 22, 2021 at 05:41:27PM +0800, Yunsheng Lin wrote:
>> As the pp page for an skb frag is always a head page, make sure
>> skb_pp_recycle() passes a head page, so that compound_head() can be
>> avoided for the skb frag page case.
>
> Doesn't that mostly rely on the driver (i.e. what's passed to skb_frag_set_page())?
> None of the current netstack code assumes bv_page is the head page of a
> compound page. Since our page_pool allocator can and will allocate compound
> pages for order > 0, why should we rely on it?
The page pool alloc function returns a 'struct page *' to the caller, which
for order > 0 is the head page of a compound page, so I assume the caller
will pass that to skb_frag_set_page().
For a non-pp page, I assume it is ok whether the page is a head page or a
tail page, as pp_magic is not set to PP_SIGNATURE for either of them.
Or should we play it safe here and do the same trick that skb_free_head()
does in patch 6?
>
> Thanks
> /Ilias
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
>> ---
>> include/linux/skbuff.h | 2 +-
>> net/core/page_pool.c | 2 --
>> 2 files changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
>> index 6bdb0db3e825..35eebc2310a5 100644
>> --- a/include/linux/skbuff.h
>> +++ b/include/linux/skbuff.h
>> @@ -4722,7 +4722,7 @@ static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
>> {
>> if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
>> return false;
>> - return page_pool_return_skb_page(virt_to_page(data));
>> + return page_pool_return_skb_page(virt_to_head_page(data));
>> }
>>
>> #endif /* __KERNEL__ */
>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>> index f7e71dcb6a2e..357fb53343a0 100644
>> --- a/net/core/page_pool.c
>> +++ b/net/core/page_pool.c
>> @@ -742,8 +742,6 @@ bool page_pool_return_skb_page(struct page *page)
>> {
>> struct page_pool *pp;
>>
>> - page = compound_head(page);
>> -
>> /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
>> * in order to preserve any existing bits, such as bit 0 for the
>> * head page of compound page and bit 1 for pfmemalloc page, so
>> --
>> 2.33.0
>>
> .
>