Message-ID: <3b2c5a07-4bc0-1feb-2daf-260e4d58c7b6@linux.alibaba.com>
Date: Fri, 7 Feb 2020 09:10:33 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
Michal Koutný <mkoutny@...e.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>,
Randy Dunlap <rdunlap@...radead.org>,
Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH v8 0/2] sched/numa: introduce numa locality
Hi Peter, Ingo,

Could you give some comments, please?

As Mel replied previously, he doesn't disagree with the idea, so we're
looking forward to hearing the maintainers' opinion.

Please allow me to highlight the necessity of monitoring NUMA Balancing
again: this feature is critical to performance on NUMA platforms, and
its cost and benefit can each be large or small, yet there is not enough
information for an admin to analyze the trade-off. Locality could be the
missing piece.

Regards,
Michael Wang
On 2020/1/21 9:56 AM, 王贇 wrote:
> v8:
> * document edited
> v7:
> * rebased on latest linux-next
> v6:
> * fix compile failure when NUMA disabled
> v5:
> * improved documentation
> v4:
> * fix comments and improved documentation
> v3:
> * simplified the locality concept & implementation
> v2:
> * improved documentation
>
> Modern production environments can use hundreds of cgroups to control
> the resources for different workloads, along with complicated resource
> binding.
>
> On NUMA platforms, where we have multiple nodes, things become even
> more complicated: we want more memory accesses to be local to improve
> performance, and NUMA Balancing keeps working hard to achieve that.
> However, a wrong memory policy or node binding can easily waste that
> effort and result in many remote page accesses.
>
> We need to notice such problems early, so we get a chance to fix them
> before too much damage is done. However, there is no good monitoring
> approach yet to help catch the culprit that introduced the remote
> accesses.
>
> This patch set tries to fill in the missing piece by introducing
> per-cgroup NUMA locality info. With this new statistic, we can do
> daily monitoring of NUMA efficiency and raise a warning when things
> go too wrong.
>
> Please check the second patch for more details.
>
> Michael Wang (2):
> sched/numa: introduce per-cgroup NUMA locality info
> sched/numa: documentation for per-cgroup numa statistics
>
> Documentation/admin-guide/cg-numa-stat.rst | 178 ++++++++++++++++++++++++
> Documentation/admin-guide/index.rst | 1 +
> Documentation/admin-guide/kernel-parameters.txt | 4 +
> Documentation/admin-guide/sysctl/kernel.rst | 9 ++
> include/linux/sched.h | 15 ++
> include/linux/sched/sysctl.h | 6 +
> init/Kconfig | 11 ++
> kernel/sched/core.c | 75 ++++++++++
> kernel/sched/fair.c | 62 +++++++++
> kernel/sched/sched.h | 12 ++
> kernel/sysctl.c | 11 ++
> 11 files changed, 384 insertions(+)
> create mode 100644 Documentation/admin-guide/cg-numa-stat.rst
>
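To illustrate the kind of daily monitoring the cover letter describes, here is a
minimal sketch of how an admin might turn per-cgroup local/remote access counts
into a locality percentage. The function name and the counter values are
illustrative assumptions for this sketch, not the actual interface the patch
set exposes; see the second patch for the real file layout.

```shell
#!/bin/sh
# Hypothetical sketch: derive NUMA locality for a cgroup from a pair of
# counters (local vs. remote page accesses). The counter source is an
# assumption here -- the actual per-cgroup file is defined by the patch.

locality_pct() {
    local_access=$1    # sampled page accesses served by the local node
    remote_access=$2   # sampled page accesses served by remote nodes
    total=$((local_access + remote_access))
    if [ "$total" -eq 0 ]; then
        # No sampled accesses yet: nothing to complain about, report 100.
        echo 100
        return
    fi
    echo $(( local_access * 100 / total ))
}

# Example: 750000 local vs. 250000 remote accesses -> 75% locality.
locality_pct 750000 250000
```

A monitoring job could run such a calculation periodically per cgroup and alert
when the percentage drops below a chosen threshold, which is the "warning when
things go too wrong" use case described above.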