Message-ID: <5c7f25f9-f86b-8e15-8603-e212b9911cac@quicinc.com>
Date: Fri, 10 Nov 2023 22:06:22 +0530
From: Charan Teja Kalla <quic_charante@...cinc.com>
To: Michal Hocko <mhocko@...e.com>
CC: <akpm@...ux-foundation.org>, <mgorman@...hsingularity.net>,
<david@...hat.com>, <vbabka@...e.cz>, <hannes@...xchg.org>,
<quic_pkondeti@...cinc.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V3 3/3] mm: page_alloc: drain pcp lists before oom kill

Thanks Michal!!

On 11/9/2023 4:03 PM, Michal Hocko wrote:
>> A VM system running with ~50MB of memory showed the below stats during
>> an OOM kill:
>> Normal free:760kB boost:0kB min:768kB low:960kB high:1152kB
>> reserved_highatomic:0KB managed:49152kB free_pcp:460kB
>>
>> Though in such a system state an OOM kill is imminent, the current kill
>> could have been delayed if the pcp lists were drained, as pcp + free is
>> even above the high watermark.
> TBH I am not sure this is really worth it. Does it really reduce the
> risk of the OOM in any practical situation?
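
(For reference, with the stats quoted above: free 760kB + free_pcp 460kB =
1220kB, which is above the 1152kB high watermark, while free alone is below
the 768kB min watermark.)
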
At least in my particular stress test case it just delayed the OOM: I can
see that at the time of the OOM kill there are no free pcp pages left. My
understanding is that the OOM kill should be the last resort, invoked only
after enough reclaim retries have been done. CMIW here.

This patch just aims to not miss the corner case where we hit the OOM kill
without having drained the pcp lists. After draining, some systems may no
longer need the OOM kill while others may still need it. My case is the
latter, so I am really not sure if we have ever encountered/noticed the
former case here.
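
To make the intent a bit more concrete, here is a rough sketch (illustrative
only, not the exact diff of this patch) of where such a drain would sit
relative to the existing high-watermark retry before the OOM kill; it assumes
mm/page_alloc.c internals such as get_page_from_freelist() and the ALLOC_*
flags:

/*
 * Illustrative sketch only -- not the exact diff in this series.  It just
 * shows where a pcp drain before falling back to the OOM kill would sit.
 */
static struct page *try_drain_before_oom(gfp_t gfp_mask, unsigned int order,
					 const struct alloc_context *ac)
{
	struct page *page;

	/*
	 * Flush the per-cpu page lists on all CPUs back to the buddy
	 * free lists so those pages become visible to the watermark
	 * checks.
	 */
	drain_all_pages(NULL);

	/*
	 * One more attempt with the drained free lists before declaring
	 * the OOM kill unavoidable.  __alloc_pages_may_oom() already does
	 * a similar ALLOC_WMARK_HIGH retry, so the drain just gives that
	 * retry a chance to see the pcp pages as well.
	 */
	page = get_page_from_freelist(gfp_mask, order,
				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
	return page;	/* NULL means we still have to go ahead with OOM */
}

The point is simply that pages sitting on the pcp lists are not accounted in
NR_FREE_PAGES, so the watermark checks cannot see them until they are drained
back to the buddy free lists.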