Message-ID: <567be36f-d4ef-e5bc-e11c-3718272d3dfe@gentwo.org>
Date: Fri, 26 Sep 2025 09:21:29 -0700 (PDT)
From: "Christoph Lameter (Ampere)" <cl@...two.org>
To: Joshua Hahn <joshua.hahnjy@...il.com>
cc: Andrew Morton <akpm@...ux-foundation.org>, 
    Johannes Weiner <hannes@...xchg.org>, Chris Mason <clm@...com>, 
    Kiryl Shutsemau <kirill@...temov.name>, 
    Brendan Jackman <jackmanb@...gle.com>, Michal Hocko <mhocko@...e.com>, 
    Suren Baghdasaryan <surenb@...gle.com>, Vlastimil Babka <vbabka@...e.cz>, 
    Zi Yan <ziy@...dia.com>, linux-kernel@...r.kernel.org, linux-mm@...ck.org, 
    kernel-team@...a.com
Subject: Re: [PATCH v2 2/4] mm/page_alloc: Perform appropriate batching in
 drain_pages_zone

On Thu, 25 Sep 2025, Joshua Hahn wrote:

> > So we need an explanation as to why there is such high contention on the
> > lock first before changing the logic here.
> >
> > The current logic seems to be designed to prevent the lock contention you
> > are seeing.
>
> This is true, but my concern was mostly with the value that is being used
> for the batching (2048 seems too high). But as I explain below, it seems
> like the min(2048, count) operation is a no-op anyways, since it is never
> called with count > 1000 (at least from the benchmarks that I was running,
> on my machine).


The problem is that you likely increase zone lock contention with a
reduced batch size.
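Roughly, the drain loop in question has this shape (a simplified sketch, not
the exact mm/page_alloc.c code; the function name suffix and the batch cap
expression here are approximations):

/* Simplified sketch of the drain loop (approximating mm/page_alloc.c, not a
 * verbatim copy).  Each iteration takes pcp->lock, frees one batch back to
 * the zone (free_pcppages_bulk() takes zone->lock internally), then drops
 * the lock.  A smaller batch means more lock round trips per drained page. */
static void drain_pages_zone_sketch(unsigned int cpu, struct zone *zone)
{
        struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
        int count;

        do {
                spin_lock(&pcp->lock);
                count = pcp->count;
                if (count) {
                        /* the cap debated in this thread (~2048 vs. smaller) */
                        int to_drain = min(count, READ_ONCE(pcp->batch));

                        free_pcppages_bulk(zone, to_drain, pcp, 0);
                        count -= to_drain;
                }
                spin_unlock(&pcp->lock);
        } while (count);
}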

Actually, the fact that there is a lock in the pcp structure at all is odd
and causes cacheline bouncing on such hot paths. The structure should only be
accessed from the cpu that owns it. Remote draining (if needed) can be
triggered via IPIs.
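Something along these lines (hypothetical sketch; drain_local_pcp and
drain_all_pcp_via_ipi are made-up names, and the old pre-lock code was
structured somewhat differently):

/* Hypothetical sketch of IPI-based remote draining: the requesting CPU never
 * touches the remote pcp cachelines, it just asks each CPU to drain its own
 * lists. */
static void drain_local_pcp(void *arg)
{
        struct zone *zone = arg;

        /* Runs on the target CPU via IPI, so all pcp accesses stay local. */
        drain_pages_zone(smp_processor_id(), zone);
}

static void drain_all_pcp_via_ipi(struct zone *zone)
{
        /* on_each_cpu() IPIs every online CPU and waits for completion. */
        on_each_cpu(drain_local_pcp, zone, 1);
}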

This is the way it used to be and the way it was tested for high core
counts years ago.

You seem to be running 176 cores here, so it's similar to what we tested back
then. If all cores are accessing the pcp structure, you get significant
cacheline bouncing. Removing the lock and going back to the IPI solution
would likely make the problem go away.

The cachelines of the allocator's per-cpu structures are usually very hot and
should only be touched by other cpus in rare circumstances.

Having a loop over all processors access all of the hot per-cpu structures
likely causes significant performance problems, and that is the source of the
issues you are seeing here.
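That loop is essentially the following (simplified, hypothetical sketch; the
real drain-all path is more selective about which CPUs it visits):

static void drain_all_pcp_remote_sketch(struct zone *zone)
{
        unsigned int cpu;

        /* One CPU walks every other CPU's pcp structure, taking pcp->lock
         * remotely and dragging those hot cachelines across the machine. */
        for_each_online_cpu(cpu)
                drain_pages_zone(cpu, zone);
}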