Message-ID: <138f3057-8aab-4bfb-a541-dbf1a51a32bb@suse.cz>
Date: Wed, 1 Oct 2025 13:23:47 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: "Christoph Lameter (Ampere)" <cl@...two.org>,
 Joshua Hahn <joshua.hahnjy@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 Johannes Weiner <hannes@...xchg.org>, Chris Mason <clm@...com>,
 Kiryl Shutsemau <kirill@...temov.name>, Brendan Jackman
 <jackmanb@...gle.com>, Michal Hocko <mhocko@...e.com>,
 Suren Baghdasaryan <surenb@...gle.com>, Zi Yan <ziy@...dia.com>,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org, kernel-team@...a.com,
 Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [PATCH v2 2/4] mm/page_alloc: Perform appropriate batching in
 drain_pages_zone

On 9/26/25 6:21 PM, Christoph Lameter (Ampere) wrote:
> On Thu, 25 Sep 2025, Joshua Hahn wrote:
> 
>>> So we need an explanation as to why there is such high contention on the
>>> lock first before changing the logic here.
>>>
>>> The current logic seems to be designed to prevent the lock contention you
>>> are seeing.
>>
>> This is true, but my concern was mostly with the value that is being used
>> for the batching (2048 seems too high). But as I explain below, it seems
>> like the min(2048, count) operation is a no-op anyway, since it is never
>> called with count > 1000 (at least from the benchmarks that I was running,
>> on my machine).
> 
> 
> The problem is that you likely increase zone lock contention with a
> reduced batch size.
> 
> Actually, the fact that there is a lock in the pcp structure at all is weird
> and causes cacheline bouncing on such hot paths. Access should be only from the cpu

The hot paths only access the lock local to them, so they should not
cause bouncing.

> that owns this structure. Remote cleaning (if needed) can be triggered via
> IPIs.

It used to be that way, but Mel changed it to the current
implementation a few years ago. IIRC one motivation was to avoid
disabling irqs (which provided the exclusion with IPI handlers), hence
the spin_trylock() approach locally and spin_lock() for remote
flushing.
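
Roughly, the pattern looks like this - a heavily simplified sketch with
made-up names (my_pcp, my_pcp_free_local, ...), not the actual
mm/page_alloc.c code; lock and list initialization is omitted:

#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

/* Illustrative stand-in for the real per-cpu pages structure. */
struct my_pcp {
	spinlock_t lock;
	struct list_head lists;
	int count;
};

static DEFINE_PER_CPU(struct my_pcp, my_pcp);

/*
 * Local fast path: never spins on the lock; when the trylock fails the
 * caller falls back to freeing directly to the zone.
 */
static bool my_pcp_free_local(struct page *page)
{
	struct my_pcp *pcp;
	bool ok = false;

	preempt_disable();		/* stay on the pcp we are locking */
	pcp = this_cpu_ptr(&my_pcp);
	if (spin_trylock(&pcp->lock)) {
		list_add(&page->lru, &pcp->lists);
		pcp->count++;
		spin_unlock(&pcp->lock);
		ok = true;
	}
	preempt_enable();
	return ok;
}

/* Remote drain: may spin on another CPU's pcp lock, but needs no IPI. */
static void my_pcp_drain_remote(int cpu)
{
	struct my_pcp *pcp = per_cpu_ptr(&my_pcp, cpu);

	spin_lock(&pcp->lock);
	/* ... move pcp->lists back to the zone's free lists ... */
	spin_unlock(&pcp->lock);
}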

Today we could theoretically use local_trylock() instead of
spin_trylock(). The benefit is that it's inline, unlike spin_trylock()
(on x86). But an IPI handler (which must succeed and can't give up if
the lock is already taken by the operation it interrupted) wouldn't
work with that - it can neither give up nor "spin". So the remote
flushes would need to use queued/flushed work instead, and then preempt
disable + local_trylock() would be enough (a work handler can't
interrupt a preempt-disabled section). I don't know whether that would
make the remote flushes too expensive, or whether they only happen in
slow paths rare enough for that to be acceptable.
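
A rough sketch of that alternative - assuming the local_trylock_t /
INIT_LOCAL_TRYLOCK / local_trylock() API behaves as described above,
with all names (my_pcp2, my_pcp2_drain_fn, ...) made up and list
initialization omitted, so this is not an existing implementation:

#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/local_lock.h>
#include <linux/mm_types.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

struct my_pcp2 {
	local_trylock_t lock;
	struct list_head lists;
	int count;
};

static DEFINE_PER_CPU(struct my_pcp2, my_pcp2) = {
	.lock = INIT_LOCAL_TRYLOCK(lock),
};
static DEFINE_PER_CPU(struct work_struct, my_pcp2_drain_work);

/*
 * Local fast path: inline trylock; on !PREEMPT_RT it keeps preemption
 * disabled for the critical section, so the drain work below cannot run
 * on this CPU in the middle of it.
 */
static bool my_pcp2_free_fast(struct page *page)
{
	struct my_pcp2 *pcp;

	if (!local_trylock(&my_pcp2.lock))
		return false;	/* fall back to freeing to the zone */
	pcp = this_cpu_ptr(&my_pcp2);
	list_add(&page->lru, &pcp->lists);
	pcp->count++;
	local_unlock(&my_pcp2.lock);
	return true;
}

/* Runs on the owning CPU, so a plain local_lock() is fine here. */
static void my_pcp2_drain_fn(struct work_struct *work)
{
	local_lock(&my_pcp2.lock);
	/* ... flush this CPU's lists back to the zone ... */
	local_unlock(&my_pcp2.lock);
}

/*
 * Remote flush: queue the drain on the owning CPU and wait for it,
 * instead of acquiring its pcp lock (or sending an IPI).
 */
static void my_pcp2_drain_remote(int cpu)
{
	struct work_struct *w = per_cpu_ptr(&my_pcp2_drain_work, cpu);

	schedule_work_on(cpu, w);
	flush_work(w);
}

static int __init my_pcp2_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_WORK(per_cpu_ptr(&my_pcp2_drain_work, cpu),
			  my_pcp2_drain_fn);
	return 0;
}
early_initcall(my_pcp2_init);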

> This is the way it used to be and the way it was tested for high core
> counts years ago.
> 
> You seem to run 176 cores here so it's similar to what we tested way back
> when. If all cores are accessing the pcp structure then you have
> significant cacheline bouncing. Removing the lock and going back to the
> IPI solution would likely remove the problem.

I doubt the problem here is about cacheline bouncing of the pcp. AFAIK
it's that free_frozen_page_commit() is called under preempt_disable()
(pcpu_spin_trylock does that) and performs a potentially long
free_pcppages_bulk() operation under spin_lock_irqsave(&zone->lock). So
multiple cpus with similarly long free_pcppages_bulk() will spin on the
zone lock with irqs disabled.
Breaking the time the zone lock is held into smaller batches will help
that and reduce the irqs-disabled time. But there might still be long
preemption-disabled times for the pcp, and IIRC that's enough to cause
rcu_sched stalls? So patch 4/4 also relinquishes the pcp lock itself
(i.e. enables preemption), which, as we already saw from the lkp
report, isn't trivial to do. But none of this is about pcp cacheline
bouncing, AFAICS.
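
To spell out the batching idea - a hypothetical helper and batch value,
not the actual patch:

#include <linux/minmax.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>

#define MY_DRAIN_BATCH	64	/* illustrative value only */

/*
 * Return 'count' pages from 'pages' to 'zone' in small batches so that
 * zone->lock (and the irqs-off section) is never held for the whole
 * drain.
 */
static void my_drain_in_batches(struct zone *zone, struct list_head *pages,
				int count)
{
	unsigned long flags;

	while (count) {
		int batch = min(count, MY_DRAIN_BATCH);

		spin_lock_irqsave(&zone->lock, flags);
		/*
		 * ... move up to 'batch' pages from 'pages' to the zone's
		 * free lists (what free_pcppages_bulk() does for real) ...
		 */
		spin_unlock_irqrestore(&zone->lock, flags);

		count -= batch;
		/*
		 * irqs are enabled again here between batches; patch 4/4
		 * additionally drops the pcp lock to re-enable preemption.
		 */
	}
}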

> The cachelines of the allocator per cpu structures are usually very hot
> and should only be touched in rare circumstances from other cpus.

It should be rare enough to not be an issue.

> Having a loop over all processors accessing all the hot per-cpu structures
> is likely causing significant performance issues and therefore the issues
> that you are seeing here.

