Message-ID: <079a0315-efea-9221-8538-47decf263684@huawei.com>
Date: Fri, 13 Dec 2019 14:53:37 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: "Li,Rongqing" <lirongqing@...du.com>,
Jesper Dangaard Brouer <brouer@...hat.com>
CC: Saeed Mahameed <saeedm@...lanox.com>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"mhocko@...nel.org" <mhocko@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Björn Töpel <bjorn.topel@...el.com>
Subject: Re: Reply: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE condition
On 2019/12/13 14:27, Li,Rongqing wrote:
>>
>> It is good to allocate the rx page close to both the cpu and the device,
>> but if both goals cannot be reached, maybe we choose to allocate a page
>> that is close to the cpu?
>>
> I think that is true.
>
> If so, we can remove pool->p.nid, replace alloc_pages_node with
> alloc_pages in __page_pool_alloc_pages_slow, and change pool_page_reusable
> so that page_to_nid(page) is checked against numa_mem_id(),
>
> since alloc_pages hints to use pages from the current node, and
> __page_pool_alloc_pages_slow will often be called in NAPI polling when
> recycling fails, so after some cycles the pages will come from the local
> memory node.
Yes, if allocation and recycling happen in the same NAPI polling context.

As pointed out by Saeed and Ilias, the allocation and recycling may not be
happening in the same NAPI polling context, see:

"In the current code base if they are only called under NAPI this might be true.
On the page_pool skb recycling patches though (yes we'll eventually send those
:)) this is called from kfree_skb()."

So this may need some additional attention.
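
For reference, below is a rough, untested sketch of the change you describe,
based on my reading of the current pool_page_reusable() and
__page_pool_alloc_pages_slow() in net/core/page_pool.c, so treat it as
illustration only, not a tested patch:

/* Allocation side (__page_pool_alloc_pages_slow): instead of
 * alloc_pages_node(pool->p.nid, ...), let the allocator prefer the
 * node of the CPU doing the allocation:
 */
	page = alloc_pages(gfp, pool->p.order);

/* Recycle side: check against the local memory node instead of
 * pool->p.nid:
 */
static bool pool_page_reusable(struct page_pool *pool, struct page *page)
{
	/* Only recycle pages that are not pfmemalloc and that sit on the
	 * memory node local to the CPU doing the recycling.
	 */
	return !page_is_pfmemalloc(page) &&
	       page_to_nid(page) == numa_mem_id();
}

Note that with the kfree_skb() recycling path mentioned above, numa_mem_id()
would be evaluated in whatever context frees the skb, which is exactly the
concern here.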
>
> -Li
>