Message-ID:
<PH7PR21MB3116E2C35824D8F673363B36CA09A@PH7PR21MB3116.namprd21.prod.outlook.com>
Date: Fri, 4 Aug 2023 20:20:53 +0000
From: Haiyang Zhang <haiyangz@...rosoft.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, Jesse Brandeburg
<jesse.brandeburg@...el.com>, "linux-hyperv@...r.kernel.org"
<linux-hyperv@...r.kernel.org>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>
CC: Dexuan Cui <decui@...rosoft.com>, KY Srinivasan <kys@...rosoft.com>, Paul
Rosswurm <paulros@...rosoft.com>, "olaf@...fle.de" <olaf@...fle.de>, vkuznets
<vkuznets@...hat.com>, "davem@...emloft.net" <davem@...emloft.net>,
"wei.liu@...nel.org" <wei.liu@...nel.org>, "edumazet@...gle.com"
<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, "leon@...nel.org" <leon@...nel.org>,
Long Li <longli@...rosoft.com>, "ssengar@...ux.microsoft.com"
<ssengar@...ux.microsoft.com>, "linux-rdma@...r.kernel.org"
<linux-rdma@...r.kernel.org>, "daniel@...earbox.net" <daniel@...earbox.net>,
"john.fastabend@...il.com" <john.fastabend@...il.com>, "bpf@...r.kernel.org"
<bpf@...r.kernel.org>, "ast@...nel.org" <ast@...nel.org>, Ajay Sharma
<sharmaajay@...rosoft.com>, "tglx@...utronix.de" <tglx@...utronix.de>,
"shradhagupta@...ux.microsoft.com" <shradhagupta@...ux.microsoft.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Ilias
Apalodimas <ilias.apalodimas@...aro.org>
Subject: RE: [PATCH V5,net-next] net: mana: Add page pool for RX buffers
> -----Original Message-----
> From: Jesper Dangaard Brouer <hawk@...nel.org>
> Sent: Friday, August 4, 2023 6:50 AM
> To: Jesse Brandeburg <jesse.brandeburg@...el.com>; Haiyang Zhang
> <haiyangz@...rosoft.com>; linux-hyperv@...r.kernel.org;
> netdev@...r.kernel.org
> Cc: Dexuan Cui <decui@...rosoft.com>; KY Srinivasan <kys@...rosoft.com>;
> Paul Rosswurm <paulros@...rosoft.com>; olaf@...fle.de; vkuznets
> <vkuznets@...hat.com>; davem@...emloft.net; wei.liu@...nel.org;
> edumazet@...gle.com; kuba@...nel.org; pabeni@...hat.com;
> leon@...nel.org; Long Li <longli@...rosoft.com>;
> ssengar@...ux.microsoft.com; linux-rdma@...r.kernel.org;
> daniel@...earbox.net; john.fastabend@...il.com; bpf@...r.kernel.org;
> ast@...nel.org; Ajay Sharma <sharmaajay@...rosoft.com>; hawk@...nel.org;
> tglx@...utronix.de; shradhagupta@...ux.microsoft.com; linux-
> kernel@...r.kernel.org; Ilias Apalodimas <ilias.apalodimas@...aro.org>
> Subject: Re: [PATCH V5,net-next] net: mana: Add page pool for RX buffers
>
>
>
> On 03/08/2023 03.44, Jesse Brandeburg wrote:
> > On 8/2/2023 11:07 AM, Haiyang Zhang wrote:
> >> Add a page pool for RX buffers, for faster buffer recycling and
> >> reduced CPU usage.
> >>
>
> Can you add some info on the performance improvement this patch gives?
I will add this to the patch description.
>
> Your previous post mentioned:
> > With an iperf test using 128 threads, this patch improved the
> > throughput by 12-15% and decreased the IRQ-associated CPU's usage
> > from 99-100% to 10-50%.
>
>
> >> The standard page pool API is used.
> >>
> >> Signed-off-by: Haiyang Zhang <haiyangz@...rosoft.com>
> >> ---
> >> V5:
> >> In the error path, call page_pool_put_full_page(..., false), as
> >> suggested by Jakub Kicinski
> >> V4:
> >> Add nid setting, remove page_pool_nid_changed(), as suggested by
> >> Jesper Dangaard Brouer
> >> V3:
> >> Update xdp mem model, pool param, alloc as suggested by Jakub Kicinski
> >> V2:
> >> Use the standard page pool API as suggested by Jesper Dangaard Brouer
> >> ---
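
To illustrate the V3/V4 items above, the setup sequence looks roughly
like the sketch below. This is illustrative only: the pool-size value,
the function name, and the header path are assumptions for this sketch,
not the patch's exact code.

#include <linux/err.h>
#include <net/page_pool.h>
#include <net/xdp.h>
#include <net/mana/mana.h>

static int rxq_page_pool_setup_sketch(struct mana_rxq *rxq, int nid)
{
	struct page_pool_params pprm = {
		.pool_size = 512,	/* assumed RX ring depth */
		.nid = nid,		/* explicit nid, per the V4 change */
	};
	int err;

	rxq->page_pool = page_pool_create(&pprm);
	if (IS_ERR(rxq->page_pool))
		return PTR_ERR(rxq->page_pool);

	/* V3 item: register the pool as the XDP memory model so that
	 * redirected frames are returned to the pool.
	 */
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
					 MEM_TYPE_PAGE_POOL,
					 rxq->page_pool);
	if (err) {
		page_pool_destroy(rxq->page_pool);
		rxq->page_pool = NULL;
	}
	return err;
}

Per the V4 note, setting pprm.nid once at create time replaces the
per-packet page_pool_nid_changed() call.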
> >
> >> diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
> >> index 024ad8ddb27e..b12859511839 100644
> >> --- a/include/net/mana/mana.h
> >> +++ b/include/net/mana/mana.h
> >> @@ -280,6 +280,7 @@ struct mana_recv_buf_oob {
> >> struct gdma_wqe_request wqe_req;
> >>
> >> void *buf_va;
> >> + bool from_pool; /* allocated from a page pool */
> >
> > I suggest you use flags rather than bools: each bool wastes 7 bits,
> > and the packing of this struct will be full of holes, made worse by
> > this patch (see the pahole tool).
> >
>
> Agreed.
Thanks, I will do this when adding more flags in future patches.
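For example, something along these lines (a sketch only -- the flag
name and the trimmed-down struct are illustrative, not a proposed
patch):

#include <linux/bits.h>
#include <linux/types.h>

#define MANA_RXB_FROM_POOL	BIT(0)	/* was: bool from_pool */

/* Trimmed-down illustration of the layout: one flags word costs one
 * bit per boolean and leaves 31 bits for future flags, instead of a
 * padded byte per bool.
 */
struct mana_recv_buf_oob_sketch {
	void *buf_va;
	u32 flags;		/* holds MANA_RXB_* bits */
	u32 num_sge;
};

static inline bool rxb_from_pool(const struct mana_recv_buf_oob_sketch *r)
{
	return r->flags & MANA_RXB_FROM_POOL;
}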
>
> >
> >>
> >> /* SGL of the buffer going to be sent has part of the work request. */
> >> u32 num_sge;
> >> @@ -330,6 +331,8 @@ struct mana_rxq {
> >> bool xdp_flush;
> >> int xdp_rc; /* XDP redirect return code */
> >>
> >> + struct page_pool *page_pool;
> >> +
> >> /* MUST BE THE LAST MEMBER:
> >> * Each receive buffer has an associated mana_recv_buf_oob.
> >> */
> >
> >
> > The rest of the patch looks OK and is remarkably compact for a
> > conversion to page pool. I'd prefer that someone with more page pool
> > exposure review this for correctness, but FWIW
> >
>
> Both Jakub and I have reviewed the page_pool parts, and I think we are
> in a good place.
>
> Looking at the driver, I wonder why you are keeping the driver-local
> memory cache (when the PP also contains a memory cache)?
> (I assume there is a good reason, so this is not blocking the patch.)
Do you mean the "else" branch of the code below? It handles a compound
page from napi_alloc_frag(), which does not come from the pool. If any
error happens, we save the buffer locally for reuse.
drop:
	if (from_pool) {
		page_pool_recycle_direct(rxq->page_pool,
					 virt_to_head_page(buf_va));
	} else {
		WARN_ON_ONCE(rxq->xdp_save_va);
		/* Save for reuse */
		rxq->xdp_save_va = buf_va;
	}
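
For context, the allocation side is roughly the sketch below (not the
driver's exact code; the function name and the size check are
assumptions here). Single pages come from the page pool; anything
larger is a compound-page frag from napi_alloc_frag() that the pool
never sees, which is why the "else" path above saves it in
rxq->xdp_save_va.

#include <linux/skbuff.h>
#include <net/page_pool.h>
#include <net/mana/mana.h>

static void *rx_buf_alloc_sketch(struct mana_rxq *rxq, u32 alloc_size,
				 bool *from_pool)
{
	struct page *page;

	*from_pool = false;

	if (alloc_size > PAGE_SIZE)
		/* Compound-page frag; on error it is cached in
		 * rxq->xdp_save_va for reuse, as shown above.
		 */
		return napi_alloc_frag(alloc_size);

	page = page_pool_dev_alloc_pages(rxq->page_pool);
	if (!page)
		return NULL;

	*from_pool = true;
	return page_to_virt(page);
}

On the refill/error path, the V5 change returns pages with
page_pool_put_full_page(..., false) rather than
page_pool_recycle_direct(), presumably because direct recycling is
only safe from the pool's NAPI/softirq context.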
>
> >
> > Reviewed-by: Jesse Brandeburg <jesse.brandeburg@...el.com>
>
> Thanks for taking the time to review.
>
> I'm ready to ACK once the description is improved a bit :-)
>
> --Jesper
> pw-bot: cr