Message-ID: <20200730162348.GA679955@carbon.dhcp.thefacebook.com>
Date:   Thu, 30 Jul 2020 09:23:48 -0700
From:   Roman Gushchin <guro@...com>
To:     Hugh Dickins <hughd@...gle.com>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>, <linux-mm@...ck.org>,
        <kernel-team@...com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm: vmstat: fix /proc/sys/vm/stat_refresh generating
 false warnings

On Wed, Jul 29, 2020 at 08:45:47PM -0700, Hugh Dickins wrote:
> On Tue, 14 Jul 2020, Roman Gushchin wrote:
> 
> > I've noticed a number of warnings like "vmstat_refresh: nr_free_cma
> > -5" or "vmstat_refresh: nr_zone_write_pending -11" on our production
> > hosts. The numbers of these warnings were relatively low and stable,
> > so it didn't look like we were systematically leaking the counters.
> > The corresponding vmstat counters also looked sane.
> 
> nr_zone_write_pending: yes, I've looked at our machines, and see that
> showing up for us too (-49 was the worst I saw).  Not at all common,
> but seen.  And not followed by increasingly worse numbers, so a state
> that corrects itself.  nr_dirty too (fewer instances, bigger numbers);
> but never nr_writeback, which you'd expect to go along with those.

NR_DIRTY and NR_WRITEBACK are node counters, so we don't check them?
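
(For context: as vmstat_refresh() stands, it only sanity-checks the
zone and, under CONFIG_NUMA, the numa counters; roughly the loop
below, with no equivalent pass over NR_VM_NODE_STAT_ITEMS, which is
why node counters like NR_FILE_DIRTY and NR_WRITEBACK never show up.)

	/* roughly the existing check in vmstat_refresh() */
	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
		val = atomic_long_read(&vm_zone_stat[i]);
		if (val < 0) {
			pr_warn("%s: %s %ld\n",
				__func__, zone_stat_name(i), val);
			err = -EINVAL;
		}
	}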

> 
> I wish I could explain that (I've wondered if something somewhere
> delays incrementing the stat, and can be interrupted by the
> decrementer of the stat before it gets to the increment), but have
> not succeeded.
> Perhaps it is all down to the vmstat_refresh() skid that you hide in
> this patch; but I'm not convinced.
> 
> > 
> > These warnings are generated by the vmstat_refresh() function, which
> > assumes that atomic zone and numa counters can't go below zero.
> > However, on an SMP machine it's not quite right: due to per-cpu
> > caching it can in theory be as low as -(zone threshold) * NR_CPUs.
> > 
> > For instance, let's say all cma pages are in use and NR_FREE_CMA_PAGES
> > reached 0. Then we've reclaimed a small number of cma pages on each
> > CPU except CPU0, so that most percpu NR_FREE_CMA_PAGES counters are
> > slightly positive (the atomic counter is still 0). Then somebody on
> > CPU0 consumes all these pages. The number of pages can easily exceed
> > the threshold and a negative value will be committed to the atomic
> > counter.
> > 
> > To fix the problem and avoid generating false warnings, let's just
> > relax the condition and warn only if the value is less than minus
> > the maximum theoretically possible drift value, which is 125 *
> > number of online CPUs. It will still allow us to catch systematic leaks,
> > but will not generate bogus warnings.
> 
> Sorry, but despite the acks of others, I want to NAK this in its
> present form.

Sorry to hear this.
> 
> You're right that there's a possibility of a spurious warning,
> but is that so terrible?

We collect all warnings fleet-wide, and false warnings create noise,
which makes it easier to miss real problems.
Of course, we can filter out these particular warnings, but then
what's the point of generating them?

In my opinion, warnings with a bad signal-to-noise ratio aren't good:
initially they cause somebody to look into the "problem" and waste
their time, and later they are usually just ignored, even when a real
problem appears.

In this particular case I was testing some cma-related changes,
and when I saw a bunch of new warnings (cma was not used on these hosts
before), I was concerned that something was wrong with my changes.

> 
> I'm imagining a threshold of 125, and 128 cpus, and the following
> worst case.  Yes, it is possible that when vmstat_refresh() schedules
> its refresh on all the cpus, that the first 127 cpus to complete all
> sync a count of 0, but immediately after each allocates 125 of whatever
> (raising their per-cpu counters without touching the global counter);
> and then, before the 128th cpu starts its sync, somehow that 128th
> gets to be the lucky cpu to free all of those 127*125 items,
> so arriving at a mistaken count of -15875 for that stat.

First, I have to agree: 125 * number of cpus is definitely a very high
number, so it's extremely unlikely that any vmstat value will reach it
in real life. I'm totally happy to go with a (much) lower limit,
I just have no good idea how to justify any particular number below it.
If you can suggest something, I'd appreciate it a lot.
I like the number 400 :)
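
(For reference, the relaxed condition in the patch is along the lines
of the sketch below; the vmstat_refresh() hunk itself isn't quoted in
this thread.)

	long max_drift = MAX_THRESHOLD * num_online_cpus();

	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
		val = atomic_long_read(&vm_zone_stat[i]);
		/* only report values below the worst-case drift */
		if (val < -max_drift) {
			pr_warn("%s: %s %ld\n",
				__func__, zone_stat_name(i), val);
			err = -EINVAL;
		}
	}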

> 
> And I guess you could even devise a test which conspires to do that.
> But is it so likely that it's worth throwing away the warning when
> we leak (or perhaps it's unleak) 16000 huge pages?  I don't think so:
> I think it makes those warnings pretty much useless, and it would be
> better just to delete the code that issues them.

Of course, that's an option too (deleting the warnings altogether).

> 
> But there's other things you could do, that we can probably agree on.
> 
> When stat_refresh first went in, there were two stats (NR_ALLOC_BATCH
> and NR_PAGES_SCANNED) which were peculiarly accounted, and gave rise
> to negative reports: the original commit just filtered those cases out
> in a switch.  Maybe you should filter out NR_ZONE_WRITE_PENDING and
> NR_FREE_CMA_PAGES, if there's nothing to fix in their accounting.

In fact, there is nothing specific to NR_ZONE_WRITE_PENDING and
NR_FREE_CMA_PAGES: any counter whose value often bounces around 0 can
go negative, and it's not an indication of any issue.
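
Here is a standalone toy (userspace C, all numbers made up) showing
how a counter whose true total is exactly 0 can be reported negative:
per-cpu deltas below the fold threshold never reach the global
counter, while a burst of allocations on one cpu folds negative
values in.

	#include <stdio.h>

	#define NCPUS		4
	#define THRESHOLD	125	/* fold point for per-cpu deltas */

	static long global;	/* stands in for the atomic counter */
	static int pcp[NCPUS];	/* per-cpu deltas */

	static void mod_state(int cpu, int delta)
	{
		pcp[cpu] += delta;
		if (pcp[cpu] > THRESHOLD || pcp[cpu] < -THRESHOLD) {
			global += pcp[cpu];
			pcp[cpu] = 0;
		}
	}

	int main(void)
	{
		int cpu, i;

		/* cpus 1..3 each free 100 pages: their deltas stay
		 * below the threshold, global never moves */
		for (cpu = 1; cpu < NCPUS; cpu++)
			for (i = 0; i < 100; i++)
				mod_state(cpu, 1);

		/* cpu 0 allocates all 300: its delta repeatedly
		 * overflows and folds negative values into global */
		for (i = 0; i < 300; i++)
			mod_state(0, -1);

		printf("global = %ld\n", global);	/* prints -252 */
		return 0;
	}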

> 
> I'm not sure exactly what your objection is to the warning: would
> you prefer pr_info or pr_debug to pr_warn?  I'd prefer to leave it
> as pr_warn, but can compromise.

Yeah, we can go with pr_debug as well.

> 
> (IIRC xfstests can fail a test if an unexpected message appears
> in the log; but xfstests does not use /proc/sys/vm/stat_refresh.)
> 
> But a better idea is perhaps to redefine the behavior of
> "echo >/proc/sys/vm/stat_refresh".  What if
> "echo someparticularstring >/proc/sys/vm/stat_refresh" were to
> disable or enable the warning (permanently? or just that time?):
> disable would be more "back-compatible", but I think it's okay
> if you prefer enable.  Or "someparticularstring" could actually
> specify the warning threshold you want to use - you might echo
> 125 or 16000, I might echo 0.  We can haggle over the default.

May I ask what kind of problems you have in mind that could be
revealed by these warnings? Or maybe there is some history attached?

If it's all about some particular counters which are known to be
strictly positive, maybe we should do the opposite and check only
those counters? Because in general a negative value is not an
indication of a problem.
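
(If we do go the interface route, I could imagine something like the
fragment below; this is purely a hypothetical sketch, the variable
name and parsing are made up.)

	/* hypothetical: a number written to /proc/sys/vm/stat_refresh
	 * becomes the warning threshold, so "echo 0" warns on any
	 * negative value and "echo 16000" only on large ones;
	 * vmstat_warn_threshold would be a new static long */
	if (write && *lenp) {
		long threshold;

		if (!kstrtol(strim((char *)buffer), 10, &threshold))
			vmstat_warn_threshold = threshold;
	}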

> 
> And there's a simpler change we already made internally: we didn't
> mind the warning at all, but the accompanying -EINVALs became very
> annoying.  A lot of testing scripts have "set -e" in them, and for
> test B of feature B to fail because earlier test A of feature A
> had tickled a bug in A that wrapped some stat negative, that
> was very irritating.  We deleted those "err = -EINVAL;"s -
> which might be what's actually most annoying you too?

> 
> Nit in this patch called out below.
> 
> Hugh
> 
> > 
> > Signed-off-by: Roman Gushchin <guro@...com>
> > Cc: Hugh Dickins <hughd@...gle.com>
> > ---
> >  Documentation/admin-guide/sysctl/vm.rst |  4 ++--
> >  mm/vmstat.c                             | 30 ++++++++++++++++---------
> >  2 files changed, 21 insertions(+), 13 deletions(-)
> > 
> > diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
> > index 4b9d2e8e9142..95fb80d0c606 100644
> > --- a/Documentation/admin-guide/sysctl/vm.rst
> > +++ b/Documentation/admin-guide/sysctl/vm.rst
> > @@ -822,8 +822,8 @@ e.g. cat /proc/sys/vm/stat_refresh /proc/meminfo
> >  
> >  As a side-effect, it also checks for negative totals (elsewhere reported
> >  as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
> > -(At time of writing, a few stats are known sometimes to be found negative,
> > -with no ill effects: errors and warnings on these stats are suppressed.)
> > +(On an SMP machine some stats can temporarily become negative, with no ill
> > +effects: errors and warnings on these stats are suppressed.)
> >  
> >  
> >  numa_stat
> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index a21140373edb..8f0ef8aaf8ee 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -169,6 +169,8 @@ EXPORT_SYMBOL(vm_node_stat);
> >  
> >  #ifdef CONFIG_SMP
> >  
> > +#define MAX_THRESHOLD 125
> > +
> >  int calculate_pressure_threshold(struct zone *zone)
> >  {
> >  	int threshold;
> > @@ -186,11 +188,9 @@ int calculate_pressure_threshold(struct zone *zone)
> >  	threshold = max(1, (int)(watermark_distance / num_online_cpus()));
> >  
> >  	/*
> > -	 * Maximum threshold is 125
> > +	 * Threshold is capped by MAX_THRESHOLD
> >  	 */
> > -	threshold = min(125, threshold);
> > -
> > -	return threshold;
> > +	return min(MAX_THRESHOLD, threshold);
> >  }
> >  
> >  int calculate_normal_threshold(struct zone *zone)
> 
> calculate_normal_threshold() also contains a 125:
> better change that to MAX_THRESHOLD too, if you do pursue this.

Totally agree, good catch.
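
Abridged, the second hunk would then look like this (a sketch against
mm/vmstat.c with the long comments trimmed):

	int calculate_normal_threshold(struct zone *zone)
	{
		int threshold;
		int mem;	/* memory in 128 MB units */

		mem = zone_managed_pages(zone) >> (27 - PAGE_SHIFT);
		threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem));

		/*
		 * Threshold is capped by MAX_THRESHOLD
		 */
		return min(MAX_THRESHOLD, threshold);
	}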

Thanks!

Roman
