Message-ID: <ZD/8R6sacS45ggyt@dhcp22.suse.cz>
Date:   Wed, 19 Apr 2023 16:35:51 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Marcelo Tosatti <mtosatti@...hat.com>
Cc:     Frederic Weisbecker <frederic@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Aaron Tomlin <atomlin@...mlin.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Russell King <linux@...linux.org.uk>,
        Huacai Chen <chenhuacai@...nel.org>,
        Heiko Carstens <hca@...ux.ibm.com>, x86@...nel.org,
        Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH v7 00/13] fold per-CPU vmstats remotely

On Wed 19-04-23 10:48:03, Marcelo Tosatti wrote:
> On Wed, Apr 19, 2023 at 02:24:01PM +0200, Frederic Weisbecker wrote:
[...]
> > 2) Run critical code
> > 3) Optionally do something once you're done
> > 
> > If vmstat is going to be the only thing to wait for on 1), then the remote
> > solution looks good enough (although I leave that to -mm guys as I'm too
> > clueless about those matters), 
> 
> I am mostly clueless too, but I don't see a problem with the proposed
> patch (and no one has pointed out any problem either).

I really hate to repeat myself again. The biggest pushback has been on
a) the justification and b) a single-purpose solution which is very
likely incomplete. For a) we are getting the story piece by piece, which
doesn't speed up the process. You are proposing a non-trivial change to
already convoluted code, so the expectation of a solid justification
shouldn't be all that surprising.

b) is what concerns me more though. There are other per-CPU specific
things going on that require some regular flushing. Just to mention
another one that your group has brought up: the memcg pcp caches. That
also came with a non-trivial proposal to deal with the problem [1], and
it has turned out that we can do a simpler thing [2]. I do not think it
is a stretch to expect that similar things will pop up every now and
then, and rather than dealing with each one in its own way it makes
sense to come up with a more general concept so that all those cases can
be handled in a single place at least. All I hear about that is that the
code of those special applications would need to be changed to use it.
Well, true, but is that bar really so impractical that we are instead
going to grow kernel complexity and therefore a maintenance burden?
Everything for very specialized workloads?
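
Just to illustrate the scale of the application-side change I have in
mind, here is a minimal sketch. It only pokes the existing
/proc/sys/vm/stat_refresh knob (a root-only read folds the per-CPU
vmstat deltas into the global counters); the single, more general
quiesce interface argued for above does not exist, so treat the helper
as illustrative rather than a proposed API:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Illustrative only: fold pending per-CPU vmstat deltas by reading
 * /proc/sys/vm/stat_refresh (root only). A generic "flush all per-CPU
 * caches" interface as discussed in this thread does not exist yet.
 */
static int quiesce_vmstat(void)
{
	char buf[1];
	int fd = open("/proc/sys/vm/stat_refresh", O_RDONLY);

	if (fd < 0)
		return -1;
	/* The read itself triggers the flush; it returns no data. */
	if (read(fd, buf, sizeof(buf)) < 0) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}

int main(void)
{
	/* 1) quiesce before entering the latency critical section */
	if (quiesce_vmstat())
		perror("stat_refresh");

	/* 2) run the latency critical loop on the isolated CPU */
	for (;;)
		; /* placeholder for the real workload */
}

A single-place kernel interface would replace that open/read with one
well defined call, but the point stands: the application has to opt in
somewhere.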

[1] http://lkml.kernel.org/r/20221102020243.522358-1-leobras@redhat.com
[2] http://lkml.kernel.org/r/20230317134448.11082-1-mhocko@kernel.org
-- 
Michal Hocko
SUSE Labs
