[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <89d49efa-23a5-4bed-cd81-0de05500c518@linux.alibaba.com>
Date: Mon, 2 Dec 2019 10:11:14 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>
Subject: Re: [PATCH v2 1/3] sched/numa: advanced per-cgroup numa statistic
On 2019/11/29 下午6:06, Michal Koutný wrote:
> On Fri, Nov 29, 2019 at 01:19:33PM +0800, 王贇 <yun.wang@...ux.alibaba.com> wrote:
>> I did some research regarding cpuacct, and find cpuacct_charge() is a good
>> place to do hierarchical update, however, what we get there is the execution
>> time delta since last update_curr().
> I wouldn't extend cpuacct, I'd like to look into using the rstat
> mechanism for per-CPU runtime collection. (Most certainly I won't get
> down to this until mid December though.)
>
>> I'm afraid we can't just do local/remote accumulation since the sample period
>> now is changing, still have to accumulate the execution time into locality
>> regions.y
> My idea was to decouple time from the locality counters completely. It'd
> be up to the monitoring application to normalize differences wrt
> sampling rate (and handle wrap arounds).
I see, basically I understand your proposal as utilize cpuacct's runtime
and only expose per-cgroup local/remote counters, I'm not sure if the
locality still helpful after decouple time factor from it, both need some
investigation, anyway, once I could convince myself it's working, I'll
be happy to make things simple ;-)
Regards,
Michael Wang
>
>
> Michal
>
Powered by blists - more mailing lists