Message-ID: <ZbOQC0mYNsX0voKM@tiehlicka>
Date: Fri, 26 Jan 2024 11:57:15 +0100
From: Michal Hocko <mhocko@...e.com>
To: Zach O'Keefe <zokeefe@...gle.com>
Cc: Charan Teja Kalla <quic_charante@...cinc.com>,
akpm@...ux-foundation.org, mgorman@...hsingularity.net,
david@...hat.com, vbabka@...e.cz, hannes@...xchg.org,
quic_pkondeti@...cinc.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Axel Rasmussen <axelrasmussen@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH V3 3/3] mm: page_alloc: drain pcp lists before oom kill

On Fri 26-01-24 16:17:04, Charan Teja Kalla wrote:
> Hi Michal/Zach,
>
> On 1/25/2024 10:06 PM, Zach O'Keefe wrote:
> > Thanks for the patch, Charan, and thanks to Yosry for pointing me towards it.
> >
> > I took a look at data from our fleet, and there are many cases on
> > high-cpu-count machines where we find multi-GiB worth of data sitting
> > on pcpu free lists at the time of system oom-kill, when free memory
> > for the relevant zones is below min watermarks. I.e., clear cases
> > where this patch could have prevented OOM.
> >
> > This kind of issue scales with the number of cpus, so presumably this
> > patch will only become increasingly valuable to datacenters and
> > desktops alike going forward. Can we revamp it as a standalone patch?
Do you have any example OOM reports? There have been recent changes to
scale the pcp pages, and it would be good to know whether they work
reasonably well even under memory pressure.

I am not objecting to the patch discussed here, but it would be really
good to understand the underlying problem and its scale.
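
For reference, the shape of the change being discussed is roughly the
following (a minimal sketch only, with an assumed hook point on the
OOM path; it is not the actual patch):

	/*
	 * Sketch: once direct reclaim has failed and we are about to
	 * invoke the OOM killer, flush every CPU's pcp lists back to
	 * the buddy allocator and retry the watermark check.  On large
	 * machines multi-GiB can sit on pcp lists, enough to put a
	 * zone back over its min watermark.
	 */
	if (!page) {
		drain_all_pages(NULL);	/* NULL: drain pcp lists of all zones */

		/* one more attempt before declaring OOM */
		page = get_page_from_freelist(gfp_mask, order,
					ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
		if (page)
			goto out;
	}
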
Thanks!
--
Michal Hocko
SUSE Labs