Message-ID: <20191101133528.GP28938@suse.de>
Date:   Fri, 1 Nov 2019 13:35:28 +0000
From:   Mel Gorman <mgorman@...e.de>
To:     王贇 <yun.wang@...ux.alibaba.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/numa: advanced per-cgroup numa statistic

On Fri, Nov 01, 2019 at 07:52:15PM +0800, 王贇 wrote:
> > a much higher degree of flexibility on what information is tracked and
> > allow flexibility on 
> > 
> > So, overall I think this can be done outside the kernel but recognise
> > that it may not be suitable in all cases. If you feel it must be done
> > inside the kernel, split out the patch that adds information on failed
> > page migrations as it stands apart. Put it behind its own kconfig entry
> > that is disabled by default -- do not tie it directly to NUMA balancing
> > because of the data structure changes. When enabled, it should still be
> > disabled by default at runtime and only activated via kernel command line
> > parameter so that the only people who pay the cost are those that take
> > deliberate action to enable it.
> 
> Agreed, we could keep the per-task faults info there, giving the
> possibility of implementing a practical userland tool,

I'd prefer not because that would still require the space in the locality
array to store the data. I'd also prefer that numa_faults_locality[]
information is not exposed unless this feature is enabled. That information
is subject to change and interpreting it requires knowledge of the
internals of automatic NUMA balancing.

There are just too many corner cases where the information is garbage.
Tasks with a memory policy would never update the counters, short-lived
tasks may not update it, interleaving will give confusing information about
locality, the timing of the reads matters because the counters might be
cleared, and the frequency at which they are cleared is unknown because it
is adaptive -- the list goes on. I find it very very difficult to believe
that a
tool based on faults_locality will be able to give anything but the
most superficial help and any sensible decision will require ftrace or
numa_maps to get real information.
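
For illustration, this is the kind of "real information" a tool can
already get from numa_maps (see numa_maps(5)) without any new kernel
interface; the pid and values below are made up:

	$ grep anon= /proc/1234/numa_maps
	7f60c75e0000 default anon=2048 dirty=2048 N0=1024 N1=1024 kernelpagesize_kB=4

Summing the N<node>= pages across a task's VMAs gives the actual
placement of its memory rather than a guess derived from fault counters.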

> meanwhile keeping the kernel numa data disabled by default; folks who
> have no tool but want to do easy monitoring can just turn on the
> switch :-)
> 
> Will have these in next version:
> 
>  * separate patch for showing per-task faults info

Please only expose the failed= (or migfailed=) in that patch. Do not
expose numa_faults_locality unless it is explicitly enabled on behalf of
a tool that claims it can sensibly interpret it.
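
As a sketch of the shape that counter could take (the field name, hook
placement and output key below are placeholders, not taken from the
actual patch):

	/* include/linux/sched.h: bare per-task counter, made-up name */
	unsigned long numa_migrate_failed;

	/* mm/memory.c, in the do_numa_page() migration path */
	migrated = migrate_misplaced_page(page, vma, target_nid);
	if (!migrated)
		current->numa_migrate_failed++;

	/* kernel/sched/debug.c, proc_sched_show_task() */
	SEQ_printf(m, "migfailed=%lu\n", p->numa_migrate_failed);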

>  * new CONFIG for numa stat (disabled by default)
>  * dynamical runtime switch for numa stat (disabled by default)

Dynamic runtime enabling will mean that if it's turned on, the
information will be temporarily useless until stats have accumulated.
Make sure to note that in any associated documentation, and state a
preference for enabling it with a kernel parameter.
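
Concretely, the gating could look something like this (the config
symbol, boot parameter and hook below are placeholders, not from the
actual series):

	# Kconfig entry, disabled by default
	config CGROUP_NUMA_STAT
		bool "Advanced per-cgroup NUMA statistics"
		depends on NUMA_BALANCING
		default n

	/* runtime switch: a static key keeps the disabled case down to a
	 * patched-out branch on the hot path */
	static DEFINE_STATIC_KEY_FALSE(sched_numa_stat);

	static int __init setup_cg_numa_stat(char *str)
	{
		static_branch_enable(&sched_numa_stat);
		return 1;
	}
	__setup("cg_numa_stat", setup_cg_numa_stat);

	/* accounting site */
	if (static_branch_unlikely(&sched_numa_stat))
		update_numa_stat(p);	/* made-up hook */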

-- 
Mel Gorman
SUSE Labs
