Message-ID: <42800224-ed45-2e37-1960-0da29eb3bc38@linux.alibaba.com>
Date: Fri, 27 Dec 2019 10:22:45 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
Michal Koutný <mkoutny@...e.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>,
Randy Dunlap <rdunlap@...radead.org>,
Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH v6 0/2] sched/numa: introduce numa locality
Hi folks, are there any more comments?
Regards,
Michael Wang
On 2019/12/13 9:43 AM, 王贇 wrote:
> Since v5:
> * fix compile failure when NUMA disabled
> Since v4:
> * improved documentation
> Since v3:
> * fix comments and improved documentation
> Since v2:
> * simplified the locality concept & implementation
> Since v1:
> * improved documentation
>
> A modern production environment can use hundreds of cgroups to control
> the resources of different workloads, along with complicated
> resource binding.
>
> On NUMA platforms, where we have multiple nodes, things become even more
> complicated: we want as many local memory accesses as possible to improve
> performance, and NUMA Balancing keeps working hard to achieve that.
> However, a wrong memory policy or node binding can easily waste that
> effort and result in a lot of remote page accesses.
>
> We need to notice such problems early, so we get a chance to fix them
> before too much damage is done. However, there is no good monitoring
> approach yet to help catch the culprit that introduced the remote accesses.
>
> This patch set tries to fill in the missing piece by introducing
> per-cgroup NUMA locality info. With this new statistic, we can
> do daily monitoring of NUMA efficiency and give a warning when
> things go too wrong.
>
> Please check the second patch for more details.
>
> Michael Wang (2):
> sched/numa: introduce per-cgroup NUMA locality info
> sched/numa: documentation for per-cgroup numa statistics
>
> Documentation/admin-guide/cg-numa-stat.rst | 178 ++++++++++++++++++++++++
> Documentation/admin-guide/index.rst | 1 +
> Documentation/admin-guide/kernel-parameters.txt | 4 +
> Documentation/admin-guide/sysctl/kernel.rst | 9 ++
> include/linux/sched.h | 15 ++
> include/linux/sched/sysctl.h | 6 +
> init/Kconfig | 11 ++
> kernel/sched/core.c | 75 ++++++++++
> kernel/sched/fair.c | 62 +++++++++
> kernel/sched/sched.h | 12 ++
> kernel/sysctl.c | 11 ++
> 11 files changed, 384 insertions(+)
> create mode 100644 Documentation/admin-guide/cg-numa-stat.rst
>
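To illustrate the kind of daily monitoring the quoted cover letter describes, here is a minimal userspace sketch. The counter names, the 70% threshold, and the warning policy are all made up for illustration; the actual per-cgroup interface and its layout are defined in the second patch of the series.

```python
# Hypothetical daily NUMA-locality check built on per-cgroup access
# counters such as those this series introduces. The counter values
# here would come from the cgroup statistics file; the names and the
# threshold are illustrative only.

def numa_locality(local_accesses, remote_accesses):
    """Return the share of local page accesses as a percentage."""
    total = local_accesses + remote_accesses
    if total == 0:
        # No sampled accesses yet: treat as fully local, nothing to warn about.
        return 100.0
    return 100.0 * local_accesses / total

def check_cgroup(local, remote, threshold=70.0):
    """Warn when locality drops below a (hypothetical) threshold."""
    locality = numa_locality(local, remote)
    if locality < threshold:
        print(f"warning: NUMA locality {locality:.1f}% below {threshold:.0f}%")
    return locality

if __name__ == "__main__":
    check_cgroup(local=900, remote=100)  # 90% local: healthy, silent
    check_cgroup(local=300, remote=700)  # 30% local: prints a warning
```

A real monitor would read the counters periodically, diff them between samples, and alert on the delta rather than the lifetime totals, so that an old imbalance does not mask a fresh regression.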