Message-ID: <5a432a0c18719adcfe4768e1c541010a8c22ea11.camel@redhat.com>
Date: Tue, 30 May 2023 12:04:36 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Haiyang Zhang <haiyangz@...rosoft.com>, linux-hyperv@...r.kernel.org,
netdev@...r.kernel.org
Cc: decui@...rosoft.com, kys@...rosoft.com, paulros@...rosoft.com,
olaf@...fle.de, vkuznets@...hat.com, davem@...emloft.net,
wei.liu@...nel.org, edumazet@...gle.com, kuba@...nel.org, leon@...nel.org,
longli@...rosoft.com, ssengar@...ux.microsoft.com,
linux-rdma@...r.kernel.org, daniel@...earbox.net,
john.fastabend@...il.com, bpf@...r.kernel.org, ast@...nel.org,
sharmaajay@...rosoft.com, hawk@...nel.org, tglx@...utronix.de,
shradhagupta@...ux.microsoft.com, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH V3,net] net: mana: Fix perf regression: remove rx_cqes,
tx_cqes counters
On Fri, 2023-05-26 at 08:38 -0700, Haiyang Zhang wrote:
> The apc->eth_stats.rx_cqes counter is one per NIC (vport), and it sits
> on the frequent, parallel code path of all queues. So, r/w access to
> this single shared variable by many threads on different CPUs creates
> a lot of caching and memory overhead, hence the perf regression. It is
> also not accurate due to the high-volume concurrent r/w.
>
> For example, with a workload of iperf with 128 threads and RPS
> enabled, we saw a perf regression of 25% with the previous patch
> that added the counters. This patch eliminates that regression.
>
> Since the error path of mana_poll_rx_cq() already has warnings,
> keeping the counter and converting it to a per-queue variable is not
> necessary. So, just remove this counter from this high-frequency
> code path.
>
> Also, remove the tx_cqes counter for the same reason. We have
> warnings & other counters for errors on that path, and don't need
> to count every normal CQE that is processed.
FTR, if in the future you need the above counters again, you could
re-add them using per-cpu variables to avoid re-introducing the
regression addressed here.
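
For example, a minimal sketch of what that could look like (the struct
and function names below are made up for illustration, not the actual
mana driver code; a real driver would more likely allocate the storage
per device with alloc_percpu() instead of a static DEFINE_PER_CPU):

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/types.h>

/* Hypothetical per-cpu stats: each CPU increments its own copy,
 * so the hot path never bounces a shared cache line.
 */
struct mana_pcpu_stats {
	u64 rx_cqes;
	u64 tx_cqes;
};

static DEFINE_PER_CPU(struct mana_pcpu_stats, mana_pcpu_stats);

/* Hot path: lock-free increment of this CPU's private counter. */
static inline void mana_count_rx_cqe(void)
{
	this_cpu_inc(mana_pcpu_stats.rx_cqes);
}

/* Slow path (e.g. ethtool stats): fold the per-cpu copies into
 * a single total.
 */
static u64 mana_read_rx_cqes(void)
{
	u64 total = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		total += per_cpu(mana_pcpu_stats, cpu).rx_cqes;

	return total;
}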
Cheers,
Paolo