Date:   Thu, 2 Mar 2017 17:30:54 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Shaohua Li <shli@...com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Kernel-team@...com,
        minchan@...nel.org, hughd@...gle.com, riel@...hat.com,
        mgorman@...hsingularity.net, akpm@...ux-foundation.org
Subject: Re: [PATCH V5 6/6] proc: show MADV_FREE pages info in smaps

On Thu 02-03-17 09:01:01, Johannes Weiner wrote:
> On Wed, Mar 01, 2017 at 07:57:35PM +0100, Michal Hocko wrote:
> > On Wed 01-03-17 13:31:49, Johannes Weiner wrote:
[...]
> > > The error when reading a specific smaps should be completely ok.
> > > 
> > > In numbers: even if your process is madvising from 16 different CPUs,
> > > the error in its smaps file will peak at 896K in the worst case. That
> > > level of concurrency tends to come with much bigger memory quantities
> > > for that amount of error to matter.
> > 
> > It is still an unexpected behavior IMHO and an implementation detail
> > that leaks to userspace.
> 
> We have per-cpu fuzz in every single vmstat counter. Look at
> calculate_normal_threshold() in vmstat.c and the sample thresholds for
> when per-cpu deltas are flushed. In the vast majority of machines, the
> per-cpu error in these counters is much higher than what we get with
> pagevecs holding back a few pages.
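
For context on the number quoted above: 896K works out if one assumes a
full pagevec held back on each CPU and 4K pages (PAGEVEC_SIZE was 14 at
the time):

	16 CPUs * 14 pages * 4K per page = 896K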

Yes, but vmstat counters serve a different use case AFAIK. You mostly look
at those when debugging or watching the system. /proc/<pid>/smaps is quite
often used to gather per-task metrics which then feed into some decision
making, so it should be less fuzzy if that is possible.
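
To illustrate the batching Johannes refers to, here is a deliberately
simplified userspace sketch of the vmstat scheme (the names and the fixed
threshold are made up for illustration; the real code keeps per-zone
thresholds computed by calculate_normal_threshold() and folds the per-CPU
deltas into the global counters in mm/vmstat.c):

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS		16
#define THRESHOLD	32	/* real threshold scales with zone size and CPU count */

static long global_count;		/* what a /proc reader would see */
static int pcpu_delta[NR_CPUS];		/* per-CPU deltas not yet folded in */

/* Account one event on @cpu; the global counter only moves once the
 * per-CPU delta crosses the threshold, so readers lag behind. */
static void mod_counter(int cpu, int delta)
{
	pcpu_delta[cpu] += delta;
	if (abs(pcpu_delta[cpu]) > THRESHOLD) {
		global_count += pcpu_delta[cpu];
		pcpu_delta[cpu] = 0;
	}
}

int main(void)
{
	/* 20 events spread over 16 CPUs: no per-CPU delta crosses the
	 * threshold, so the global view still reads 0. */
	for (int i = 0; i < 20; i++)
		mod_counter(i % NR_CPUS, 1);
	printf("global view: %ld (up to %d events per CPU still pending)\n",
	       global_count, THRESHOLD);
	return 0;
}

So a reader of the global counter can be off by up to the threshold times
the number of CPUs, which is the same kind of fuzz as with the pagevecs
above.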

> It's not that I think you're wrong: it *is* an implementation detail.
> But we take a bit of incoherency from batching all over the place, so
> it's a little odd to take a stand over this particular instance of it
> - whether by demanding that it be fixed, or documented, which would
> only suggest to users that this is special when it really isn't, etc.

I am not aware of any other counter printed in smaps that would suffer
from the same problem, but I haven't checked too deeply, so I might be wrong.

Anyway, it seems that I am alone in my position, so I will not insist.
If we get any bug reports then we can still fix it.

-- 
Michal Hocko
SUSE Labs
