Message-ID: <20191101173936.GB16165@blackbody.suse.cz>
Date: Fri, 1 Nov 2019 18:39:36 +0100
From: Michal Koutný <mkoutny@...e.com>
To: 王贇 <yun.wang@...ux.alibaba.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched/numa: advanced per-cgroup numa statistic
Hello Yun.
On Tue, Oct 29, 2019 at 03:57:20PM +0800, 王贇 <yun.wang@...ux.alibaba.com> wrote:
> +static void update_numa_statistics(struct cfs_rq *cfs_rq)
> +{
> + int idx;
> + unsigned long remote = current->numa_faults_locality[3];
> + unsigned long local = current->numa_faults_locality[4];
> +
> + cfs_rq->nstat.jiffies++;
This statistic effectively duplicates what
kernel/sched/cpuacct.c:cpuacct_charge() does (measuring per-CPU time).
Hence it seems redundant.
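For reference, per-cgroup per-CPU time is already exported through the
cpuacct controller's cpuacct.usage_percpu file. A minimal userspace sketch
(the cgroup path below is only an assumption for illustration, adjust to
the actual mount point and hierarchy):

	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical v1 hierarchy path, not taken from the patch. */
		FILE *f = fopen("/sys/fs/cgroup/cpuacct/mygroup/cpuacct.usage_percpu", "r");
		unsigned long long ns;

		if (!f)
			return 1;
		/* One cumulative nanosecond counter per possible CPU. */
		while (fscanf(f, "%llu", &ns) == 1)
			printf("%llu\n", ns);
		fclose(f);
		return 0;
	}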
> +
> + if (!remote && !local)
> + return;
> +
> + idx = (NR_NL_INTERVAL - 1) * local / (remote + local);
> + cfs_rq->nstat.locality[idx]++;
IIUC the mechanism behind the numa_faults_locality values, this statistic
only estimates the access locality based on NUMA balancing samples, i.e.
more precise sources of that information exist.
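To make the bucketing in the quoted hunk concrete, here is a toy
computation of the histogram index from sampled fault counts (both
NR_NL_INTERVAL and the counts are made-up values for illustration, not
taken from the patch):

	#include <stdio.h>

	#define NR_NL_INTERVAL 8	/* assumed bucket count, illustration only */

	int main(void)
	{
		unsigned long local = 3, remote = 1;	/* sampled NUMA fault counts */
		int idx;

		/* Local-fault share scaled onto buckets 0..NR_NL_INTERVAL-1. */
		idx = (NR_NL_INTERVAL - 1) * local / (remote + local);

		printf("idx = %d\n", idx);	/* 7 * 3 / 4 = 5, i.e. ~75% locality */
		return 0;
	}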
All in all, I'd concur with Mel's suggestion of external measurement.
Michal