Date:   Sat, 2 Nov 2019 09:13:42 +0800
From:   王贇 <yun.wang@...ux.alibaba.com>
To:     Michal Koutný <mkoutny@...e.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched/numa: advanced per-cgroup numa statistic

Hi, Michal

On 2019/11/2 1:39 AM, Michal Koutný wrote:
> Hello Yun.
> 
> On Tue, Oct 29, 2019 at 03:57:20PM +0800, 王贇 <yun.wang@...ux.alibaba.com> wrote:
>> +static void update_numa_statistics(struct cfs_rq *cfs_rq)
>> +{
>> +	int idx;
>> +	unsigned long remote = current->numa_faults_locality[3];
>> +	unsigned long local = current->numa_faults_locality[4];
>> +
>> +	cfs_rq->nstat.jiffies++;
> This statistic effectively duplicates what
> kernel/sched/cpuacct.c:cpuacct_charge() does (measuring per-CPU time).
> Hence it seems redundant.

Yes, but since there is no guarantee in v1 that the cpu controller is
always co-mounted with cpuacct, we can't rely on that...
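
For instance, in v1 the two controllers can be mounted on separate
hierarchies entirely. A minimal sketch of that via mount(2) (the
mount paths here are just hypothetical examples):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* cgroup v1 lets each controller live on its own hierarchy,
	 * so a task's cpu cgroup need not match its cpuacct cgroup */
	if (mount("none", "/sys/fs/cgroup/cpu", "cgroup", 0, "cpu"))
		perror("mount cpu");
	if (mount("none", "/sys/fs/cgroup/cpuacct", "cgroup", 0, "cpuacct"))
		perror("mount cpuacct");
	return 0;
}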

> 
>> +
>> +	if (!remote && !local)
>> +		return;
>> +
>> +	idx = (NR_NL_INTERVAL - 1) * local / (remote + local);
>> +	cfs_rq->nstat.locality[idx]++;
> IIUC the mechanism behind the numa_faults_locality values, this
> statistic only estimates the access locality based on NUMA balancing
> samples, i.e. there exists a more precise source of that information.
>
> All in all, I'd concur with Mel's suggestion of external measurement.

Currently NUMA balancing is the only thing I can find that tells the
real story: at least we know that after the page fault, the task did
access the page from that CPU. Although it can't cover all the cases,
it still gives good hints :-)

It would be great if we could find more indicators like this, such as
the migration failure counter Mel mentioned, which gives good hints on
memory policy problems and could be used as an external measurement.
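
For reference, here is a minimal userspace sketch of the locality
bucketing above (NR_NL_INTERVAL = 8 and the fault counts are made-up
assumptions, just for illustration):

#include <stdio.h>

#define NR_NL_INTERVAL 8	/* bucket count, assumed for illustration */

int main(void)
{
	unsigned long locality[NR_NL_INTERVAL] = { 0 };
	/* made-up (local, remote) fault count samples */
	unsigned long samples[][2] = { {4, 0}, {3, 1}, {1, 3}, {0, 4} };

	for (int i = 0; i < 4; i++) {
		unsigned long local = samples[i][0];
		unsigned long remote = samples[i][1];

		if (!remote && !local)
			continue;
		/* map the local/(local + remote) ratio into a bucket:
		 * all-remote lands in bucket 0, all-local in the top one */
		int idx = (NR_NL_INTERVAL - 1) * local / (remote + local);
		locality[idx]++;
	}
	for (int i = 0; i < NR_NL_INTERVAL; i++)
		printf("bucket %d: %lu\n", i, locality[i]);
	return 0;
}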

Regards,
Michael Wang

> 
> Michal
> 
