Message-Id: <20250106215425.3108-1-johndale@cisco.com>
Date: Mon, 6 Jan 2025 13:54:25 -0800
From: John Daley <johndale@...co.com>
To: kuba@...nel.org
Cc: andrew+netdev@...n.ch,
benve@...co.com,
davem@...emloft.net,
edumazet@...gle.com,
johndale@...co.com,
neescoba@...co.com,
netdev@...r.kernel.org,
pabeni@...hat.com,
satishkh@...co.com
Subject: Re: [PATCH net-next v4 4/6] enic: Use the Page Pool API for RX when MTU is less than page size
>> The Page Pool API improves bandwidth and reduces CPU overhead by
>> recycling pages instead of allocating new buffers in the driver. Make
>> use of page pool fragment allocation for smaller MTUs so that
>> multiple packets can share a page.
>
>Why the MTU limitation? You can set page_pool_params.order to an
>appropriate value and always use the page pool.
I thought it might waste memory, e.g. allocating 16K for a 9000-byte
MTU. But now that you mention it, I see that the added code complexity
is probably not worth it. I am unclear on what to set pp_params.max_len
to when MTU > PAGE_SIZE: order * PAGE_SIZE, or the MTU size? In that
case the pages won't be fragmented, so isn't it only necessary for the
MTU-sized area to be DMA synced?
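To make the question concrete, this is roughly what I had in mind
(untested sketch; buf_len as the MTU-derived receive buffer length is
my placeholder, not code from the patch):

	struct page_pool_params pp_params = {
		.flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order     = get_order(buf_len), /* 0 while buf_len <= PAGE_SIZE */
		.pool_size = vrq->ring.desc_count,
		.nid       = dev_to_node(&enic->pdev->dev),
		.dev       = &enic->pdev->dev,
		.dma_dir   = DMA_FROM_DEVICE,
		.max_len   = buf_len, /* or order * PAGE_SIZE? */
		.offset    = 0,
	};

	rq->pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->pool))
		return PTR_ERR(rq->pool);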
>
>> Added a 'pp_alloc_error' per-RQ ethtool statistic to count
>> page_pool_dev_alloc() failures.
>
>SG, but please don't report it via ethtool. Add it in
>enic_get_queue_stats_rx() as alloc_fail (and enic_get_base_stats()).
>As one of the benefits you'll be able to use
>tools/testing/selftests/drivers/net/hw/pp_alloc_fail.py
>to test this stat and error handling in the driver.
ok, will do.
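Roughly along these lines, I expect (untested sketch; the per-RQ stats
struct and pp_alloc_error field naming are placeholders):

	static void enic_get_queue_stats_rx(struct net_device *dev, int idx,
					    struct netdev_queue_stats_rx *rxs)
	{
		struct enic *enic = netdev_priv(dev);

		rxs->alloc_fail = enic->rq[idx].stats.pp_alloc_error;
	}

	static void enic_get_base_stats(struct net_device *dev,
					struct netdev_queue_stats_rx *rxs,
					struct netdev_queue_stats_tx *txs)
	{
		/* nothing accumulated outside the live per-queue counters */
		rxs->alloc_fail = 0;
	}

	static const struct netdev_stat_ops enic_netdev_stat_ops = {
		.get_queue_stats_rx = enic_get_queue_stats_rx,
		.get_base_stats     = enic_get_base_stats,
	};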
>
>> +void enic_rq_page_cleanup(struct enic_rq *rq)
>> +{
>> + struct vnic_rq *vrq = &rq->vrq;
>> + struct enic *enic = vnic_dev_priv(vrq->vdev);
>> + struct napi_struct *napi = &enic->napi[vrq->index];
>> +
>> + napi_free_frags(napi);
>
>why?
Mistake, left over from a previous patch. I will also remove
enic_rq_error_reset(), which calls napi_free_frags() at a time when
napi->skb is not owned by the driver.
>
>> + page_pool_destroy(rq->pool);
>> +}
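With the napi_free_frags() call gone, the cleanup reduces to just
tearing down the pool:

	void enic_rq_page_cleanup(struct enic_rq *rq)
	{
		page_pool_destroy(rq->pool);
	}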
I will make a v5 shortly. Would you recommend I split the patchset
into two parts, as I think Andrew was suggesting? The last two patches
are largely unrelated to the first four.