Open Source and information security mailing list archives
Date:   Tue, 22 Aug 2017 16:19:30 -0700
From:   Andi Kleen <andi@...stfloor.org>
To:     Christopher Lameter <cl@...ux.com>
Cc:     Kemi Wang <kemi.wang@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Johannes Weiner <hannes@...xchg.org>,
        Dave <dave.hansen@...ux.intel.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Ying Huang <ying.huang@...el.com>,
        Aaron Lu <aaron.lu@...el.com>, Tim Chen <tim.c.chen@...el.com>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/2] Separate NUMA statistics from zone statistics

Christopher Lameter <cl@...ux.com> writes:

> Can we simply get rid of the stats or make them configurable (off by
> default)? I agree they are rarely used and have been rarely used in the past.
>
> Maybe some instrumentation for perf etc. would allow
> similar statistics these days? Thus it's possible to drop them?
>
> The space in the pcp pageset is precious and we should strive to use no
> more than a cacheline for the diffs.

The statistics are useful and we sometimes need them. And more and more
runtime switches are a pain -- if you needed them, they would likely be
turned off. The key is just to make them cheap enough that they're not a
problem.

The only problem was that the standard vmstats, which are also
optimized for readers, are too expensive for this path.

The motivation for the patch was that the frequent atomics
were proven to slow the allocator down; Kemi's patch
fixed that, and this has been shown with lots of data.

I don't really see the point of so much discussion about a single cache
line.

There are lots of cache lines used all over the VM. Why is this one
special? Adding one more shouldn't be that bad.

But there's no data at all that touching another cache line
here is a problem.

It's next to an already-touched cache line, so it's highly
likely that a prefetcher would catch it anyway.

I can see the point of worrying about overall cache line footprint
("death of a thousand cuts"), but the right way to address problems like
this is to use a profiler on a realistic workload and systematically
look at the code that actually has cache misses. And I bet we would
find quite a few that could be easily avoided and have real
payoff. It would really surprise me if it was this cache line.
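Concretely, that kind of systematic look could be done with perf (a
hedged sketch; exact events and durations depend on the CPU and the
workload being profiled):

```shell
# Sample cache misses system-wide for 30s under a realistic workload,
# then rank symbols by miss count instead of guessing at one line.
perf record -e cache-misses -a -- sleep 30
perf report --sort=symbol --stdio | head -20

# On CPUs that support it, perf c2c additionally identifies the exact
# contended cache lines and distinguishes true from false sharing.
perf c2c record -a -- sleep 30
perf c2c report --stdio | head -40
```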

But blocking real, demonstrated improvements over a theoretical
cache line concern doesn't really help.

-Andi
