Message-Id: <20250919130644.f3a4afdf0c2e51bbec59b6e0@linux-foundation.org>
Date: Fri, 19 Sep 2025 13:06:44 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Chris Mason <clm@...com>, Kiryl
Shutsemau <kirill@...temov.name>, "Liam R. Howlett"
<Liam.Howlett@...cle.com>, Brendan Jackman <jackmanb@...gle.com>, David
Hildenbrand <david@...hat.com>, Lorenzo Stoakes
<lorenzo.stoakes@...cle.com>, Michal Hocko <mhocko@...e.com>, Mike Rapoport
<rppt@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>, Vlastimil Babka
<vbabka@...e.cz>, Zi Yan <ziy@...dia.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kernel-team@...a.com
Subject: Re: [PATCH 0/4] mm/page_alloc: Batch callers of free_pcppages_bulk

On Fri, 19 Sep 2025 12:52:18 -0700 Joshua Hahn <joshua.hahnjy@...il.com> wrote:

> While testing workloads with high sustained memory pressure on large machines
> (1TB memory, 316 CPUs), we saw an unexpectedly high number of softlockups.
> Further investigation showed that the lock in free_pcppages_bulk was being held
> for a long time, even being held while 2k+ pages were being freed [1].

What problems are caused by this, apart from a warning which can
presumably be suppressed in some fashion?

> This causes starvation in other processes for both the pcp and zone locks,
> which can lead to softlockups that cause the system to stall [2].

[2] doesn't describe such stalls.

>
> ...
>
> In our fleet, we have seen that performing batched lock freeing has led to
> significantly lower rates of softlockups, while incurring only small
> regressions (small relative to the workload and within run-to-run variation).

"our" == Meta?