Message-ID: <d1e3869f-3b9a-6395-b6cb-328ab7c26e4d@huawei.com>
Date: Thu, 1 Feb 2024 20:14:50 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>, Lorenzo Bianconi
<lorenzo.bianconi@...hat.com>
CC: <netdev@...r.kernel.org>, <davem@...emloft.net>, <kuba@...nel.org>,
<edumazet@...gle.com>, <pabeni@...hat.com>, <bpf@...r.kernel.org>,
<toke@...hat.com>, <willemdebruijn.kernel@...il.com>, <jasowang@...hat.com>,
<sdf@...gle.com>, <hawk@...nel.org>, <ilias.apalodimas@...aro.org>
Subject: Re: [PATCH v6 net-next 1/5] net: add generic per-cpu page_pool
allocator
On 2024/2/1 19:49, Lorenzo Bianconi wrote:
>>> On 2024/1/29 21:04, Lorenzo Bianconi wrote:
>>>>> On 2024/1/28 22:20, Lorenzo Bianconi wrote:
>>>>>
>>>>>> #ifdef CONFIG_LOCKDEP
>>>>>> /*
>>>>>> * register_netdevice() inits txq->_xmit_lock and sets lockdep class
>>>>>> @@ -11686,6 +11690,27 @@ static void __init net_dev_struct_check(void)
>>>>>> *
>>>>>> */
>>>>>>
>>>>>> +#define SD_PAGE_POOL_RING_SIZE 256
>>>>>
>>>>> I might have missed it if there is a reason we chose 256 here; do we
>>>>> need to use a different value for different page sizes? For a 64K page
>>>>> size, it means we might need to reserve 16MB of memory for each CPU.
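
(For illustration, assuming the 256-entry ptr_ring is completely filled:
256 * 4K = 1M per CPU, 256 * 16K = 4M per CPU and 256 * 64K = 16M per CPU.)
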
>>>>
>>>> Honestly, I have not spent time on it; most of the current page_pool users
>>>> set pool_size to 256. Anyway, do you mean something like:
>>>>
>>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>>> index f70fb6cad2b2..3934a3fc5c45 100644
>>>> --- a/net/core/dev.c
>>>> +++ b/net/core/dev.c
>>>> @@ -11806,12 +11806,11 @@ static void __init net_dev_struct_check(void)
>>>> *
>>>> */
>>>>
>>>> -#define SD_PAGE_POOL_RING_SIZE 256
>>>> static int net_page_pool_alloc(int cpuid)
>>>> {
>>>> #if IS_ENABLED(CONFIG_PAGE_POOL)
>>>
>>> Isn't it better to have a config like CONFIG_PER_CPU_PAGE_POOL to enable
>>> this feature? This config could then be selected by whoever needs the
>>> feature.
>>
>> Since it will be used for generic XDP (at least), I think this will be
>> enabled 99% of the time when bpf is enabled, right?
>>
>>>
>>>> struct page_pool_params page_pool_params = {
>>>> - .pool_size = SD_PAGE_POOL_RING_SIZE,
>>>> + .pool_size = PAGE_SIZE < SZ_64K ? 256 : 16,
>>>
>>> What about other page sizes, like 16KB?
>>> How about something like the below:
>>> PAGE_SIZE << get_order(PER_CPU_PAGE_POOL_MAX_SIZE)
>>
>> Since pool_size is the number of elements in the ptr_ring associated with
>> the pool, assuming we want to consume PER_CPU_PAGE_POOL_MAX_SIZE for each
>> cpu, something like:
>>
>> PER_CPU_PAGE_POOL_MAX_SIZE / PAGE_SIZE
Using something like the above makes sense to me, thanks.
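
Just to make sure we are on the same page, a rough sketch of how that could
look against the hunk quoted above (PER_CPU_PAGE_POOL_MAX_SIZE and the 1M
budget below are only placeholders I am making up for illustration, they do
not exist in the tree):

/* hypothetical per-cpu memory budget for the per-cpu page_pool ring;
 * name and value are only for illustration
 */
#define PER_CPU_PAGE_POOL_MAX_SIZE	SZ_1M

static int net_page_pool_alloc(int cpuid)
{
#if IS_ENABLED(CONFIG_PAGE_POOL)
	struct page_pool_params page_pool_params = {
		/* size the ptr_ring by memory budget instead of a fixed
		 * entry count, so a 64K page size does not end up
		 * reserving 16MB per CPU
		 */
		.pool_size = PER_CPU_PAGE_POOL_MAX_SIZE / PAGE_SIZE,
...

With 4K pages that still gives 256 entries, while with 64K pages it drops
to 16, matching the values in the diff above.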
>>
>> Regards,
>> Lorenzo
>
> Discussing with Jesper and Toke, we agreed the page_pool infrastructure will
> need a way to release memory when the system is under memory pressure, so we
> can defer this item to a subsequent series. What do you think?
>