Message-ID: <20191128123924.GD831@blackbody.suse.cz>
Date:   Thu, 28 Nov 2019 13:39:24 +0100
From:   Michal Koutný <mkoutny@...e.com>
To:     王贇 <yun.wang@...ux.alibaba.com>
Cc:     Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>,
        Luis Chamberlain <mcgrof@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Iurii Zaikin <yzaikin@...gle.com>,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-doc@...r.kernel.org,
        "Paul E. McKenney" <paulmck@...ux.ibm.com>
Subject: Re: [PATCH v2 1/3] sched/numa: advanced per-cgroup numa statistic

Hello.

My primary concern is still the measurement of per-NUMA-node execution
time.

First, I think exposing only the aggregated data in the numa_stat file is
a loss of information. The data are collected per-CPU and then summed over
NUMA nodes -- that summing could easily be done by the userspace consumer
of the data, which would keep the per-CPU data available.
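
For illustration, the folding into per-node sums is a few lines on the
consumer side; a minimal sketch, built with -lnuma (the per-CPU file name
and its "cpu<N> <nsec>" format are made up here, not taken from the patch):

#include <inttypes.h>
#include <numa.h>
#include <stdio.h>

int main(void)
{
	/* Assumed per-CPU export, one "cpu<N> <nsec>" line per CPU; this is
	 * a stand-in format, not what the posted patch prints. */
	FILE *f = fopen("/sys/fs/cgroup/mygroup/cpu.exectime_percpu", "r");
	uint64_t sum[128] = { 0 };	/* per-node sums */
	unsigned int cpu;
	uint64_t ns;

	if (!f || numa_available() < 0)
		return 1;

	while (fscanf(f, " cpu%u %" SCNu64, &cpu, &ns) == 2) {
		int node = numa_node_of_cpu(cpu);	/* CPU -> NUMA node */

		if (node >= 0 && node < 128)
			sum[node] += ns;
	}
	fclose(f);

	for (int n = 0; n <= numa_max_node() && n < 128; n++)
		printf("node%d %" PRIu64 "\n", n, sum[n]);

	return 0;
}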

Second, compared with the cpuacct implementation, yours has only jiffy
granularity (I may have overlooked something or be missing some context,
in which case this is a non-concern).
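
To put a number on that, a toy calculation (CONFIG_HZ=250 is just an
assumed example value):

#include <stdio.h>

int main(void)
{
	/* Toy numbers: with HZ=250 one jiffy is 4 ms, so a 300 us run slice
	 * rounds to zero jiffies, while ns-based accounting (what cpuacct
	 * receives as delta_exec from update_curr()) still records it. */
	const unsigned long long nsec_per_sec = 1000000000ULL;
	const unsigned int hz = 250;			/* assumed CONFIG_HZ */
	const unsigned long long tick_nsec = nsec_per_sec / hz;
	const unsigned long long slice_ns = 300000;	/* 0.3 ms run slice */

	printf("one jiffy             : %llu ns\n", tick_nsec);
	printf("slice, ns accounting  : %llu ns\n", slice_ns);
	printf("slice, whole jiffies  : %llu\n", slice_ns / tick_nsec);

	return 0;
}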

IOW, to me this sounds like duplicating cpuacct's job, and if that is
deemed useful for cgroup v2, I think it should be done (only once) and in
the proper place (i.e. as part of how cputime is measured in the default
hierarchy).

The previous two are design/theoretical remarks. Besides those, your patch
misses measuring tasks of scheduling classes other than fair_sched_class.
Is that intentional?

My last two comments concern the locality measurement, but they are based
on no experience or specific knowledge.

The seven percentile groups seem quite arbitrary to me; I find it strange
that the ratio of the cache-line size to the size of a u64 leaks into, and
is fixed in, the generally visible file. Wouldn't such a form be better
hidden under a _DEBUG config option?
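
(My guess at the derivation, purely for illustration and not something the
patch states:)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Guesswork: eight u64 slots fit in a 64-byte cache line; reserving
	 * one slot for another counter would leave seven for the locality
	 * buckets. */
	const size_t cacheline_bytes = 64;	/* assumed L1 line size */
	const size_t slots = cacheline_bytes / sizeof(uint64_t);

	printf("u64 slots per cache line      : %zu\n", slots);
	printf("buckets with one slot reserved: %zu\n", slots - 1);

	return 0;
}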


On Thu, Nov 28, 2019 at 10:09:13AM +0800, 王贇 <yun.wang@...ux.alibaba.com> wrote:
> Consider it like load_1/5/15, which is not accurate but tells the trend of the system
As I understand it, your patchset provides cumulative data over time, i.e.
if a user wants to see an immediate trend, they have to calculate the
differences themselves. Have I overlooked some back-off or regular zeroing?
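
(The kind of differencing I mean would be something like the sketch below;
the path and the single-value format are my assumptions, not the patch's
actual output.)

#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

static uint64_t read_counter(const char *path)
{
	uint64_t v = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%" SCNu64, &v) != 1)
			v = 0;
		fclose(f);
	}
	return v;
}

int main(void)
{
	/* Assumed path/format; sample the cumulative counter twice and take
	 * the difference, the way one would derive a "recent" trend by hand. */
	const char *path = "/sys/fs/cgroup/mygroup/cpu.numa_stat";
	uint64_t before = read_counter(path);

	sleep(10);	/* sampling interval */
	printf("delta over 10 s: %" PRIu64 "\n", read_counter(path) - before);

	return 0;
}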

Michal
