Date:   Thu, 6 Jul 2017 16:47:05 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Roman Gushchin <guro@...com>
Cc:     linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...e.com>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Rik van Riel <riel@...hat.com>, kernel-team@...com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: make allocation counters per-order

On Thu, Jul 06, 2017 at 03:46:34PM +0100, Roman Gushchin wrote:
> > The alloc counter updates are themselves a surprisingly heavy cost to
> > the allocation path and this makes it worse for a debugging case that is
> > relatively rare. I'm extremely reluctant for such a patch to be added
> > given that the tracepoints can be used to assemble such a monitor even
> > if it means running a userspace daemon to keep track of it. Would such a
> > solution be suitable? Failing that if this is a severe issue, would it be
> > possible to at least make this a compile-time or static tracepoint option?
> > That way, only people that really need it have to take the penalty.
> 
> I've tried to measure the difference with my patch applied and without
> any accounting at all (__count_alloc_event() redefined to an empty function),
> and I wasn't able to find any measurable difference.
> Can you please provide more details on the scenario where the alloc
> counters were costly?
> 

At the time I used a page allocator microbenchmark from mmtests to call
the allocator directly without zeroing pages. Triggering allocations from
userspace generally masks the overhead behind the zeroing costs. It's only
a few cycles, but given that the cycle budget for the page allocator is
tiny in some circumstances, it was noticeable. perf was used to examine
the cost.

> As the new counters replace an old one, and both are per-cpu counters,
> I believe the difference should be really small.
> 

Minimally, you add a new branch and a small number of computations. It's
small, but it's there. The cache footprint of the counters is also
increased. That is hard to take given that it's overhead for everybody on
the off-chance it helps debug something.
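To make that concrete, here is a minimal sketch (my own illustration, not
Roman's patch, and the names are invented) of what a per-order, per-cpu
bump could look like. Exactly where the extra branch and arithmetic land
depends on the real implementation:

struct per_order_alloc_stats {
	unsigned long count[MAX_ORDER];
};
static DEFINE_PER_CPU(struct per_order_alloc_stats, per_order_alloc_stats);

static inline void count_alloc_event_order(unsigned int order)
{
	/* The guard is one candidate for the extra branch in the fast path. */
	if (WARN_ON_ONCE(order >= MAX_ORDER))
		return;
	/*
	 * Indexing by order spreads updates over MAX_ORDER per-cpu
	 * counters instead of one hot counter, hence the larger cache
	 * footprint.
	 */
	this_cpu_inc(per_order_alloc_stats.count[order]);
}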

It's not a strong objection and I won't nak it on this basis. But given
that the same information can easily be obtained using tracepoints
(optionally with lower overhead via systemtap), that the information is
rarely going to be useful (no latency information, for example) and that
there is an increased maintenance cost, the patch does not seem worthwhile.
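As a rough, untested sketch of what I mean by assembling the monitor in
userspace, something like the following counts mm_page_alloc events per
order from the ftrace pipe. It assumes the kmem:mm_page_alloc event has
already been enabled and that tracefs is at the traditional debugfs path;
adjust paths and the assumed number of orders to taste:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NR_ORDERS	11	/* assumed MAX_ORDER for this sketch */

static unsigned long long counts[NR_ORDERS];
static volatile sig_atomic_t stop;

static void on_int(int sig)
{
	stop = 1;
}

int main(void)
{
	struct sigaction sa = { .sa_handler = on_int };
	FILE *fp = fopen("/sys/kernel/debug/tracing/trace_pipe", "r");
	char line[1024];
	int i;

	if (!fp) {
		perror("trace_pipe");
		return 1;
	}
	sigaction(SIGINT, &sa, NULL);

	/* Each mm_page_alloc line carries an "order=N" field. */
	while (!stop && fgets(line, sizeof(line), fp)) {
		char *p = strstr(line, "mm_page_alloc");
		char *o;

		if (!p || !(o = strstr(p, "order=")))
			continue;
		i = atoi(o + 6);
		if (i >= 0 && i < NR_ORDERS)
			counts[i]++;
	}
	fclose(fp);

	for (i = 0; i < NR_ORDERS; i++)
		if (counts[i])
			printf("order %d: %llu\n", i, counts[i]);
	return 0;
}

A systemtap or similar in-kernel aggregation would be lower overhead again
since it avoids copying every event to userspace.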

Maybe it would be slightly more convincing if there was an example of a
real problem in the field that could be debugged with this. For
high-order allocations, I previously found that it was the latency that
was of the most concern, not the absolute count since the system
started. Granted, the same criticism could be leveled at the existing
alloc counters, but at least by correlating that value with allocstall
you can determine what percentage of allocations stalled recently and
optionally ftrace at that point to figure out why. The same steps would
then indicate whether it's only high-order allocations that stall; add
stack tracing to figure out where they are coming from and go from
there. Even if the per-order counters existed, all the other debugging
steps would still be necessary, so I'm struggling to see how I would use
them properly.
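For completeness, the allocstall correlation above is trivial to script.
This sketch reports since-boot totals rather than the recent deltas you
would actually want, and counter names vary by kernel version (a single
allocstall on older kernels, per-zone allocstall_* on newer ones), so it
just sums matching prefixes from /proc/vmstat:

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *fp = fopen("/proc/vmstat", "r");
	char name[128];
	unsigned long long val, pgalloc = 0, allocstall = 0;

	if (!fp) {
		perror("/proc/vmstat");
		return 1;
	}
	/* Sum every pgalloc_* and allocstall* counter. */
	while (fscanf(fp, "%127s %llu", name, &val) == 2) {
		if (!strncmp(name, "pgalloc", 7))
			pgalloc += val;
		else if (!strncmp(name, "allocstall", 10))
			allocstall += val;
	}
	fclose(fp);

	if (pgalloc)
		printf("allocstall/pgalloc = %llu/%llu (%.4f%%)\n",
		       allocstall, pgalloc, 100.0 * allocstall / pgalloc);
	return 0;
}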

-- 
Mel Gorman
SUSE Labs
