Message-ID: <e008fef6-06d2-28d3-f4d3-229f4b181b4f@linux.alibaba.com>
Date: Thu, 28 Nov 2019 21:41:37 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>
Subject: Re: [PATCH v2 1/3] sched/numa: advanced per-cgroup numa statistic
On 2019/11/28 8:39 PM, Michal Koutný wrote:
> Hello.
>
> My primary concern is still the measuring of per-NUMA node execution
> time.
>
> First, I think exposing the aggregated data into the numa_stat file is
> loss of information. The data are collected per-CPU and then summed over
> NUMA nodes -- this could be easily done by the userspace consumer of the
> data, keeping the per-CPU data available.
>
> Second, comparing with the cpuacct implementation, yours has only jiffy
> granularity (I may have overlooked something or I miss some context,
> then it's a non-concern).
There used to be a discussion on this; Peter mentioned we no longer
expose raw ticks to userspace, and microseconds would be fine.
Basically we use this to calculate percentages, for which jiffy
granularity should be accurate enough :-)
>
> IOW, to me it sounds like duplicating cpuacct job and if that is deemed
> useful for cgroup v2, I think it should be done (only once) and at
> proper place (i.e. how cputime is measured in the default hierarchy).
But still, what if folks don't use v2? Any good suggestions?
>
> The previous two are design/theoretical remarks, however, your patch
> misses measuring of other than fair_sched_class policy tasks. Is that
> intentional?
Yes, since they don't have NUMA balancing doing optimization for them,
and generally there are not that many of them.
>
> My last two comments are to locality measurement but are based on no
> experience or specific knowledge.
>
> The seven percentile groups seem quite arbitrary to me, I find it
> strange that the ratio of cache-line size and u64 leaks and is fixed in
> the generally visible file. Wouldn't such a form be better hidden under
> a _DEBUG config option?
Sorry, but I don't get it... at first it was 10 regions; as Peter
suggested, we picked 8, but now, after inserting the member 'jiffies', it
becomes 7. The address of 'jiffies' is cache aligned, so we pick
u64 * 8 == 64 bytes to make sure the whole thing can be loaded into cache
in one go. Or did I misunderstand something?
>
>
> On Thu, Nov 28, 2019 at 10:09:13AM +0800, 王贇 <yun.wang@...ux.alibaba.com> wrote:
>> Consider it as load_1/5/15 which not accurate but tell the trend of system
> I understood your patchset provides cumulative data over time, i.e. if
> a user wants to see an immediate trend, they have to calculate
> differences. Have I overlooked some back-off or regular zeroing?
Yes, what I'm trying to highlight here is the similar usage, not the way
of monitoring ;-) As the docs say, we monitor the increments.
Regards,
Michael Wang
>
> Michal
>