Message-Id: <20180907125649.GA3995@linux.vnet.ibm.com>
Date: Fri, 7 Sep 2018 18:26:49 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Jirka Hladky <jhladky@...hat.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
	Jakub Raček <jracek@...hat.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
"kkolakow@...hat.com" <kkolakow@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [SCHEDULER] Performance drop in 4.19 compared to 4.18 kernel
* Jirka Hladky <jhladky@...hat.com> [2018-09-07 11:34:49]:
Hi Jirka,
>
> We have detected a significant performance drop (20% and more) with
> 4.19-rc1 relative to vanilla 4.18. We see the regression on several
> 2-NUMA and 4-NUMA boxes with pretty much all the benchmarks we use:
> NAS, Stream, SPECjbb2005, SPECjvm2008.
>
Do you run a single instance of these benchmarks?
I generally run SPECjbb2005 (single and multiple instances). I should be
able to run Stream. I have tried running NAS but I couldn't set it up
properly. I also run a set of perf bench scripts, but that's not a real
workload. However, perf bench gives me a visual perspective of how things
are converging. I also run an internal benchmark that mimics a trading
application.
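For reference, the kind of invocation involved is roughly the sketch
below (the parameter values are illustrative, not the exact ones from my
scripts):

  # 2 processes with 16 threads each, 512 MB of memory per process,
  # run for up to 60 seconds; the output shows how well threads and
  # memory converge across the NUMA nodes
  $ perf bench numa mem -p 2 -t 16 -P 512 -s 60
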
> Mel Gorman has suggested checking commit
> 2d4056fafa196e1ab4e7161bae4df76f9602d56d - reverting it recovered some
> of the performance, but not all of it:
>
> * Compared to 4.18, there is still a performance regression,
>   especially with NAS (sp_C_x subtest) and SPECjvm2008. On 4-NUMA
>   systems, the regression is around 10-15%.
> * Compared to 4.19-rc1 there is a clear gain across all benchmarks,
>   up to 20%.
>
> We are investigating the issue further; Mel has suggested checking
> 305c1fac3225dfa7eeb89bfe91b7335a6edd5172 next.
Can you please pick the following commits (a sketch of how to apply
them follows below):

1. 69bb3230297e881c797bbc4b3dbf73514078bc9d
   sched/numa: Stop multiple tasks from moving to the cpu at the same time
2. dc62cfdac5e5b7a61cd8a2bd4190e80b9bb408fc
   sched/numa: Avoid task migration for small numa improvement
3. 76e18a67cdd9e3609716c8a074c03168734736f9
   sched/numa: Pass destination cpu as a parameter to migrate_task_rq
4. 489c19b440ebdbabffe530b9a41389d0a8b315d9
   sched/numa: Reset scan rate whenever task moves across nodes
5. b7e9ae1ae3825f35cd0f38f1f0c8e91ea145bc30
   sched/numa: Limit the conditions where scan period is reset

from https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/commit/kernel/sched
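(A minimal sketch of how these could be applied on top of a 4.19-rc1
checkout; the remote name is just an example, and the commits are
assumed to be reachable from the fetched branches:)

  $ git remote add peterz-queue \
        https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git
  $ git fetch peterz-queue
  # apply the five commits in the order listed above
  $ git cherry-pick 69bb3230297e881c797bbc4b3dbf73514078bc9d \
        dc62cfdac5e5b7a61cd8a2bd4190e80b9bb408fc \
        76e18a67cdd9e3609716c8a074c03168734736f9 \
        489c19b440ebdbabffe530b9a41389d0a8b315d9 \
        b7e9ae1ae3825f35cd0f38f1f0c8e91ea145bc30
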
You may also want to try reverting

  f03bb6760b8e5e2bcecc88d2a2ef41c09adcab39
  sched/numa: Use task faults only if numa_group is not yet set
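(Again just a sketch, on top of your 4.19-rc1 tree:)

  $ git revert f03bb6760b8e5e2bcecc88d2a2ef41c09adcab39
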
> I want to discuss with you how we can collaborate on performance
> testing for the upstream kernel. Does your testing also show a
> performance drop in 4.19? If so, do you have any plans for a fix? If
> not, can we send you some more information about our tests so that you
> can try to reproduce it?
While I have not kept a record of the performance numbers on the
upstream kernel, I do have some rough scheduler patches aimed at
performance. I will try to clean them up and send them out soon (I will
copy you when sending them out).
>
> We would also be more than happy to test new patches for performance -
> please let us know if you are interested. We have a pool of boxes for
> that, from 1 NUMA node up to 8 NUMA nodes, both AMD and Intel, covering
> different CPU generations from Sandy Bridge to Skylake.
>
I generally test on Power8 (4-node and 16-node), 2-node Power9, 2-node
Skylake and 4-node Power7 systems. I will certainly keep you informed,
and I am eager to know the results of your experiments.
--
Thanks and Regards
Srikar Dronamraju