Message-Id: <20250110235204.8536-1-johndale@cisco.com>
Date: Fri, 10 Jan 2025 15:52:04 -0800
From: John Daley <johndale@...co.com>
To: kuba@...nel.org
Cc: andrew+netdev@...n.ch,
benve@...co.com,
davem@...emloft.net,
edumazet@...gle.com,
johndale@...co.com,
neescoba@...co.com,
netdev@...r.kernel.org,
pabeni@...hat.com,
satishkh@...co.com
Subject: Re: [PATCH net-next v4 4/6] enic: Use the Page Pool API for RX when MTU is less than page size
On 1/4/25, 5:42 PM, "Jakub Kicinski" kuba@...nel.org wrote:
>On Thu, 2 Jan 2025 14:24:25 -0800 John Daley wrote:
>> The Page Pool API improves bandwidth and CPU overhead by recycling
>> pages instead of allocating new buffers in the driver. Make use of
>> page pool fragment allocation for smaller MTUs so that multiple
>> packets can share a page.
>
>Why the MTU limitation? You can set page_pool_params.order
>to an appropriate value and always use the page pool.
>
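For reference, a minimal sketch of what that could look like at pool
creation time. The buf_size/ring_size/dev/netdev names below are
placeholders, not the actual enic code:

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= get_order(buf_size),	/* 0 when buf_size <= PAGE_SIZE */
		.pool_size	= ring_size,		/* RQ descriptor count */
		.nid		= dev_to_node(dev),
		.dev		= dev,			/* DMA device */
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE << get_order(buf_size),
		.netdev		= netdev,
	};
	struct page_pool *pool;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);
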
>> Added 'pp_alloc_error' per RQ ethtool statistic to count
>> page_pool_dev_alloc() failures.
>
>SG, but please don't report it via ethtool. Add it in
>enic_get_queue_stats_rx() as alloc_fail (and enic_get_base_stats()).
>As one of the benefits you'll be able to use
>tools/testing/selftests/drivers/net/hw/pp_alloc_fail.py
>to test this stat and error handling in the driver.
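For reference, a rough sketch of that kind of hookup through the
queue stats API (<net/netdev_queues.h>). The enic-specific struct and
field names here are guesses, not the real driver layout:

	static void enic_get_queue_stats_rx(struct net_device *dev, int idx,
					    struct netdev_queue_stats_rx *rxs)
	{
		struct enic *enic = netdev_priv(dev);
		struct enic_rq_stats *rqs = &enic->rq[idx].stats;	/* hypothetical */

		rxs->bytes = rqs->bytes;
		rxs->packets = rqs->packets;
		rxs->alloc_fail = rqs->pp_alloc_fail;	/* page_pool_dev_alloc() failures */
	}

	static void enic_get_base_stats(struct net_device *dev,
					struct netdev_queue_stats_rx *rxs,
					struct netdev_queue_stats_tx *txs)
	{
		/* nothing accumulated outside the live queues in this sketch */
		rxs->bytes = 0;
		rxs->packets = 0;
		rxs->alloc_fail = 0;
		txs->bytes = 0;
		txs->packets = 0;
	}

	static const struct netdev_stat_ops enic_netdev_stat_ops = {
		.get_queue_stats_rx	= enic_get_queue_stats_rx,
		.get_base_stats		= enic_get_base_stats,
	};

with netdev->stat_ops = &enic_netdev_stat_ops; set at probe time.
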
FYI, after making the suggested change I ran pp_alloc_fail.py, but no
errors were injected. I think the path from page_pool_dev_alloc()
never goes through page_pool_alloc_pages(). Here is what I believe the
call path is:
page_pool_dev_alloc(rq->pool, &offset, &truesize)
  page_pool_alloc(pool, offset, size, gfp)
    netmem_to_page(page_pool_alloc_netmem(pool, offset, size, gfp))
      page_pool_alloc_frag_netmem(pool, offset, *size, gfp)
        page_pool_alloc_netmems(pool, gfp)
          __page_pool_alloc_pages_slow(pool, gfp)
If I change the call from page_pool_dev_alloc() to
page_pool_dev_alloc_pages() in the driver, I do see the errors injected.
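To illustrate, the two driver-side calls look roughly like this
(declarations shown only for completeness; rq->buf_size is a
placeholder for the requested buffer length):

	struct page *page;
	unsigned int offset;
	unsigned int truesize = rq->buf_size;	/* in: wanted size, out: truesize */

	/* fragment API used by the patch -- the injected failures were not seen here */
	page = page_pool_dev_alloc(rq->pool, &offset, &truesize);

	/* full-page API -- with this call the pp_alloc_fail.py injections do show up */
	page = page_pool_dev_alloc_pages(rq->pool);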