Message-ID: <20220520101812.GW3441@techsingularity.net>
Date: Fri, 20 May 2022 11:18:12 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/4] Mitigate inconsistent NUMA imbalance behaviour
On Fri, May 20, 2022 at 10:28:02AM +0530, K Prateek Nayak wrote:
> Hello Mel,
>
> We tested the patch series on our systems.
>
> tl;dr
>
> Results of testing:
> - Benefits short running Stream tasks in NPS2 and NPS4 mode.
> - Benefits seen for tbench in NPS1 mode for 8-128 worker count.
> - Regression in Hackbench with 16 groups in NPS1 mode. A rerun of the
> same data point suggested run-to-run variation on the patched kernel.
> - Regression in case of tbench with 32 and 64 workers in NPS2 mode.
> The patched kernel however seems to report a more stable value for
> the 64 worker count compared to tip.
> - Slight regression in schbench in NPS2 and NPS4 mode for large
> worker counts but we did spot some run-to-run variation with
> both tip and the patched kernel.
>
> Below are all the detailed numbers for the benchmarks.
>
Thanks!
I looked through the results but I do not see anything that is very
alarming. Some notes.
o Hackbench with 16 groups on NPS1, that would likely be 640 tasks
communicating unless other parameters are used (rough arithmetic
sketched after this list). I expect it to be variable and it's a
heavily overloaded scenario. Initial placement is not necessarily
critical as migrations are likely to be very high. On NPS1, there is
going to be an element of random luck given that the latency to
individual CPUs and the physical topology are hidden.
o NPS2 with 128 workers. That's at the threshold where load is
potentially evenly split between the two sockets but not perfectly
split due to migrate-on-wakeup being a little unpredictable. Might
be worth checking the variability there.
o Same observations for tbench. I looked at my own results for NPS1
on Zen3 and I see a small blip there, but the mpstat heat map
indicates that the nodes are being used more evenly than without
the patch, which is expected.
o STREAM is interesting in that there are large differences between
10 runs and 100 runs. It indicates that without pinning, STREAM can
be a bit variable. The problem might be similar to NAS as reported
in the leader mail, with the variability due to commit
c6f886546cb8 for unknown reasons.
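
For reference, a rough sketch of the hackbench task-count arithmetic
from the first point, assuming the default of 20 sender and 20 receiver
tasks per group (the exact figure depends on the parameters the harness
passes to hackbench):

  # Hackbench task-count arithmetic, assuming the default of
  # 20 senders + 20 receivers per group. The real figure depends
  # on the parameters used by the test harness.
  groups = 16
  tasks_per_group = 20 + 20   # senders + receivers
  total_tasks = groups * tasks_per_group
  print(total_tasks)          # 640 tasks competing for the CPUs
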
> >
> > kernel/sched/fair.c | 59 ++++++++++++++++++++++++++---------------
> > kernel/sched/topology.c | 23 ++++++++++------
> > 2 files changed, 53 insertions(+), 29 deletions(-)
> >
>
> Please let me know if you would like me to get some additional
> data on the test system.
Other than checking variability (the min, max and range), I don't need
additional data. I suspect that in some cases, like what I observed
with NAS, there is wide variability for reasons independent of this
series.
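
If it helps, something along these lines is what I have in mind when
checking variability (a rough sketch, assuming the per-run results for
one configuration are collected into a plain list of numbers):

  import statistics

  def summarise(runs):
      # runs: per-iteration results for one benchmark/configuration
      lo, hi = min(runs), max(runs)
      mean = statistics.mean(runs)
      stdev = statistics.stdev(runs) if len(runs) > 1 else 0.0
      return {
          "min": lo,
          "max": hi,
          "range": hi - lo,
          "mean": mean,
          # coefficient of variation as a rough noise indicator
          "cv%": 100.0 * stdev / mean if mean else 0.0,
      }

  # usage: summarise(list_of_per_run_results)
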
I'm of the opinion though that your results are not a barrier for
merging. Do you agree?
--
Mel Gorman
SUSE Labs