Message-ID: <7d159ee4-7361-c04a-681e-1afc74765c5b@kernel.org>
Date: Fri, 4 Aug 2023 12:49:37 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Jesse Brandeburg <jesse.brandeburg@...el.com>,
Haiyang Zhang <haiyangz@...rosoft.com>, linux-hyperv@...r.kernel.org,
netdev@...r.kernel.org
Cc: decui@...rosoft.com, kys@...rosoft.com, paulros@...rosoft.com,
olaf@...fle.de, vkuznets@...hat.com, davem@...emloft.net,
wei.liu@...nel.org, edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
leon@...nel.org, longli@...rosoft.com, ssengar@...ux.microsoft.com,
linux-rdma@...r.kernel.org, daniel@...earbox.net, john.fastabend@...il.com,
bpf@...r.kernel.org, ast@...nel.org, sharmaajay@...rosoft.com,
hawk@...nel.org, tglx@...utronix.de, shradhagupta@...ux.microsoft.com,
linux-kernel@...r.kernel.org, Ilias Apalodimas <ilias.apalodimas@...aro.org>
Subject: Re: [PATCH V5,net-next] net: mana: Add page pool for RX buffers
On 03/08/2023 03.44, Jesse Brandeburg wrote:
> On 8/2/2023 11:07 AM, Haiyang Zhang wrote:
>> Add page pool for RX buffers for faster buffer cycle and reduce CPU
>> usage.
>>
Can you add some info on the performance improvement this patch gives?
Your previous post mentioned:
> With iperf and 128 threads test, this patch improved the throughput
> by 12-15%, and decreased the IRQ associated CPU's usage from 99-100%
> to 10-50%.
>> The standard page pool API is used.
>>
>> Signed-off-by: Haiyang Zhang <haiyangz@...rosoft.com>
>> ---
>> V5:
>> In err path, set page_pool_put_full_page(..., false) as suggested by
>> Jakub Kicinski
>> V4:
>> Add nid setting, remove page_pool_nid_changed(), as suggested by
>> Jesper Dangaard Brouer
>> V3:
>> Update xdp mem model, pool param, alloc as suggested by Jakub Kicinski
>> V2:
>> Use the standard page pool API as suggested by Jesper Dangaard Brouer
>> ---
>
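For anyone skimming the changelog above, the standard page_pool flow it
refers to looks roughly like this (a minimal sketch of the generic API,
not the mana code itself; rx_ring_size, numa_node, dma_dev and rxq are
placeholder names):

	#include <net/page_pool.h>
	#include <net/xdp.h>

	/* Per-RX-queue setup: */
	struct page_pool_params pp_params = {
		.order		= 0,		/* one page per buffer */
		.pool_size	= rx_ring_size,
		.nid		= numa_node,	/* V4: nid set up front */
		.dev		= dma_dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* V3: register the pool as the queue's XDP memory model: */
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
					 MEM_TYPE_PAGE_POOL, pool);

	/* Refill a buffer: */
	struct page *page = page_pool_dev_alloc_pages(pool);

	/* V5: in an error path, recycle with allow_direct = false,
	 * as the caller may not be in the pool's NAPI context:
	 */
	page_pool_put_full_page(pool, page, false);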
>> diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
>> index 024ad8ddb27e..b12859511839 100644
>> --- a/include/net/mana/mana.h
>> +++ b/include/net/mana/mana.h
>> @@ -280,6 +280,7 @@ struct mana_recv_buf_oob {
>> struct gdma_wqe_request wqe_req;
>>
>> void *buf_va;
>> + bool from_pool; /* allocated from a page pool */
>
> suggest you use flags and not bools, as each bool wastes 7 bits; plus,
> the packing of this struct will be full of holes, made worse by this
> patch. (see the pahole tool)
>
Agreed.
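To make that concrete, the usual pattern is a single flags word tested
with BIT() masks; a sketch (the MANA_RXBUF_* name is made up here, not
taken from the driver):

	#include <linux/bits.h>

	#define MANA_RXBUF_FROM_POOL	BIT(0)	/* hypothetical name */

	struct mana_recv_buf_oob {
		/* ... */
		void *buf_va;
		u32 flags;		/* MANA_RXBUF_* bits */
		/* ... */
	};

	/* set:  rxbuf->flags |= MANA_RXBUF_FROM_POOL;
	 * test: if (rxbuf->flags & MANA_RXBUF_FROM_POOL) ...
	 */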
>
>>
>> /* SGL of the buffer going to be sent has part of the work request. */
>> u32 num_sge;
>> @@ -330,6 +331,8 @@ struct mana_rxq {
>> bool xdp_flush;
>> int xdp_rc; /* XDP redirect return code */
>>
>> + struct page_pool *page_pool;
>> +
>> /* MUST BE THE LAST MEMBER:
>> * Each receive buffer has an associated mana_recv_buf_oob.
>> */
>
>
> The rest of the patch looks ok and is remarkably compact for a
> conversion to page pool. I'd prefer someone with more page pool exposure
> review this for correctness, but FWIW
>
Both Jakub and I have reviewed the page_pool parts, and I think we are
in a good place.
Looking at the driver, I wonder why you are keeping the driver-local
memory cache (when the page_pool also contains a memory cache)?
(I assume there is a good reason, so this is not blocking the patch.)
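For context: page_pool already front-ends its ptr_ring with a small
lockless per-pool cache, so the normal recycle path never touches the
page allocator. A generic sketch (not mana code):

	/* In NAPI context, when the driver is done with the page,
	 * allow_direct = true recycles into the lockless cache:
	 */
	page_pool_put_full_page(pool, page, true);

	/* The next refill is then served from that cache first: */
	page = page_pool_dev_alloc_pages(pool);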
>
> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@...el.com>
Thanks for taking the time to review.
I'm ready to ACK once the description is improved a bit :-)
--Jesper
pw-bot: cr