Message-ID: <558849ff-6b68-7547-cf99-36801ff24c25@huawei.com>
Date: Tue, 11 Jul 2023 19:47:14 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>,
Yunsheng Lin <yunshenglin0825@...il.com>
CC: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Michal Kubiak <michal.kubiak@...el.com>,
Larysa Zaremba <larysa.zaremba@...el.com>,
Alexander Duyck <alexanderduyck@...com>,
David Christensen <drc@...ux.vnet.ibm.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Paul Menzel <pmenzel@...gen.mpg.de>, <netdev@...r.kernel.org>,
<intel-wired-lan@...ts.osuosl.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC net-next v4 6/9] iavf: switch to Page Pool
On 2023/7/10 21:34, Alexander Lobakin wrote:
> From: Yunsheng Lin <yunshenglin0825@...il.com>
> Date: Sun, 9 Jul 2023 13:16:39 +0800
>
>> On 2023/7/7 0:38, Alexander Lobakin wrote:
>>
>> ...
>>
>>>>
>>>>> /**
>>>>> @@ -766,13 +742,19 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
>>>>> **/
>>>>> int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
>>>>> {
>>>>> - struct device *dev = rx_ring->dev;
>>>>> - int bi_size;
>>>>> + struct page_pool *pool;
>>>>> +
>>>>> + pool = libie_rx_page_pool_create(&rx_ring->q_vector->napi,
>>>>> + rx_ring->count);
>>>>
>>>> If a page is able to be split between more than one desc, perhaps the
>>>> ptr_ring size does not need to be as big as rx_ring->count.
>>>
>>> But we don't know in advance, right? Especially given that it's hidden
>>> in the lib. Anyway, you can only assume that in the regular case, where
>>> you always allocate frags of the same size, PP will split pages when 2+
>>> frags can fit there or return the whole page otherwise, but who knows
>>> what might happen.
>>
>> It seems the Intel driver is able to know the size of memory it needs
>> when creating the ring/queue/napi/pp; maybe the driver should only tell
>> libie how many descs it uses for the queue, and libie can adjust the
>> ptr_ring size accordingly?
>
> But libie can't say for sure how PP will split pages for it, right?
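The split factor follows from the size the driver passes in at alloc
time, though. A minimal sketch of the frag path (assuming the page_pool
frag API; 'pool' and 'truesize' are placeholders here, not the actual
iavf/libie code):

	struct page *page;
	unsigned int offset;

	/* PP decides whether to carve another frag out of the current
	 * page or start a new one, based on 'truesize' vs. the space
	 * left in the page.
	 */
	page = page_pool_dev_alloc_frag(pool, &offset, truesize);
	if (!page)
		return -ENOMEM;

If the driver always asks for truesize <= PAGE_SIZE / 2, every page can
back at least two descs, which is where a ptr_ring smaller than
rx_ring->count would come from.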
>
>>
>>> BTW, with recent recycling optimization, most of recycling is done
>>> directly through cache, not ptr_ring. So I'd even say it's safe to start
>>> creating smaller ptr_rings in the drivers.
>>
>> The problem is that we may use more memory than before in certain cases
>> if we don't limit the size of the ptr_ring, unless we can ensure all
>> recycling is done directly through the cache, not the ptr_ring.
> Also not sure I'm following =\
Before adding page pool support, the max memory used by the driver is:

	rx_ring->count * PAGE_SIZE

After adding page pool support, the max memory used by the driver is:

	ptr_ring->size * PAGE_SIZE +
	PP_ALLOC_CACHE_SIZE * PAGE_SIZE +
	rx_ring->count * PAGE_SIZE / pp.init_arg
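
As a concrete (assumed) example, with rx_ring->count = 1024, 4K pages,
pp.init_arg = 2 frags per page, and the ptr_ring sized to rx_ring->count
as in this patch (PP_ALLOC_CACHE_SIZE is 128):

	before: 1024 * 4K                             = 4M
	after:  1024 * 4K + 128 * 4K + 1024 * 4K / 2
	      = 4M + 512K + 2M                        = 6.5M

A sketch of how the driver could cap that, assuming the current
page_pool API ('frags_per_page' is hypothetical, something the driver
would derive from its buffer size):

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG,
		.order		= 0,
		/* pool_size sets the ptr_ring size, so it can be
		 * smaller than the HW descriptor count when pages
		 * are split into multiple frags.
		 */
		.pool_size	= rx_ring->count / frags_per_page,
		.nid		= NUMA_NO_NODE,
		.dev		= rx_ring->dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (IS_ERR(pool))
		return PTR_ERR(pool);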
>
> [...]
>
> Thanks,
> Olek
>
> .
>