Message-ID: <20250104174152.67e3f687@kernel.org>
Date: Sat, 4 Jan 2025 17:41:52 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: John Daley <johndale@...co.com>
Cc: benve@...co.com, satishkh@...co.com, andrew+netdev@...n.ch,
davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
netdev@...r.kernel.org, Nelson Escobar <neescoba@...co.com>
Subject: Re: [PATCH net-next v4 4/6] enic: Use the Page Pool API for RX when
MTU is less than page size

On Thu, 2 Jan 2025 14:24:25 -0800 John Daley wrote:
> The Page Pool API improves bandwidth and CPU overhead by recycling
> pages instead of allocating new buffers in the driver. Make use of
> page pool fragment allocation for smaller MTUs so that multiple
> packets can share a page.
Why the MTU limitation? You can set page_pool_params.order to an
appropriate value and always use the page pool.
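Roughly something along these lines, as an untested sketch (the enic
struct members and the helper name are my guesses based on this series,
adjust as needed):

static int enic_rq_alloc_pool(struct enic_rq *rq, unsigned int buf_size)
{
	struct vnic_rq *vrq = &rq->vrq;
	struct enic *enic = vnic_dev_priv(vrq->vdev);
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		/* order derived from the buffer size, no MTU special case */
		.order		= get_order(buf_size),
		.pool_size	= vrq->ring.desc_count,
		.nid		= dev_to_node(&enic->pdev->dev),
		.dev		= &enic->pdev->dev,
		.napi		= &enic->napi[vrq->index],
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE << get_order(buf_size),
		.offset		= 0,
	};

	rq->pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->pool))
		return PTR_ERR(rq->pool);

	return 0;
}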
> Added 'pp_alloc_error' per RQ ethtool statistic to count
> page_pool_dev_alloc() failures.
SG, but please don't report it via ethtool. Add it in
enic_get_queue_stats_rx() as alloc_fail (and in enic_get_base_stats()).
As one of the benefits, you'll be able to use
tools/testing/selftests/drivers/net/hw/pp_alloc_fail.py
to test this stat and the error handling in the driver.
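Minimal sketch of what I mean, assuming the pp_alloc_error counter
lives in struct enic_rq (the enic member names here are illustrative,
only the netdev_queue_stats_rx / netdev_stat_ops API is fixed):

static void enic_get_queue_stats_rx(struct net_device *dev, int idx,
				    struct netdev_queue_stats_rx *rxs)
{
	struct enic *enic = netdev_priv(dev);
	struct enic_rq *rq = &enic->rq[idx];

	rxs->alloc_fail = rq->pp_alloc_error;
}

static void enic_get_base_stats(struct net_device *dev,
				struct netdev_queue_stats_rx *rxs,
				struct netdev_queue_stats_tx *txs)
{
	/* everything is per-queue, report a zero base so the totals
	 * add up from the per-queue counters
	 */
	rxs->alloc_fail = 0;
}

static const struct netdev_stat_ops enic_stat_ops = {
	.get_queue_stats_rx	= enic_get_queue_stats_rx,
	.get_base_stats		= enic_get_base_stats,
};

and hook it up in probe with netdev->stat_ops = &enic_stat_ops;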
> +void enic_rq_page_cleanup(struct enic_rq *rq)
> +{
> +	struct vnic_rq *vrq = &rq->vrq;
> +	struct enic *enic = vnic_dev_priv(vrq->vdev);
> +	struct napi_struct *napi = &enic->napi[vrq->index];
> +
> +	napi_free_frags(napi);
why?
> +	page_pool_destroy(rq->pool);
> +}
--
pw-bot: cr