Message-ID: <YFh10eSTKY5lbE9u@dhcp22.suse.cz>
Date: Mon, 22 Mar 2021 11:47:45 +0100
From: Michal Hocko <mhocko@...e.com>
To: Aaron Tomlin <atomlin@...hat.com>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/page_alloc: try oom if reclaim is unable to make
forward progress
On Fri 19-03-21 17:29:01, Aaron Tomlin wrote:
> Hi Michal,
>
> On Thu 2021-03-18 17:16 +0100, Michal Hocko wrote:
> > On Mon 15-03-21 16:58:37, Aaron Tomlin wrote:
> > > In the situation where direct reclaim is required to make progress
> > > for compaction but no_progress_loops is already over the limit of
> > > MAX_RECLAIM_RETRIES, consider invoking the oom killer.
>
> Firstly, thank you for your response.
>
> > What is the problem you are trying to fix?
>
> If I understand correctly, in the case of a "costly" order allocation
> request that is permitted to repeatedly retry, it is possible to exceed the
> maximum reclaim retry threshold as long as "some" progress is being made
> even at the highest compaction priority.
Costly orders already have retry heuristics in place. Could you be more
specific about what kind of problem you see with those?
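To illustrate the decision flow, here is a simplified, compilable model,
not the kernel source; the real logic lives in should_reclaim_retry()
and should_compact_retry() in mm/page_alloc.c, and while the constants
below match current mm internals, the control flow is a paraphrase:

/*
 * Simplified model of the allocator retry decision; illustrative only.
 */
#include <stdbool.h>

#define PAGE_ALLOC_COSTLY_ORDER 3
#define MAX_RECLAIM_RETRIES     16

static bool should_retry(unsigned int order, bool reclaim_progress,
                         bool compaction_progress, int *no_progress_loops)
{
        if (reclaim_progress)
                *no_progress_loops = 0;
        else
                (*no_progress_loops)++;

        /*
         * Costly orders lean on the compaction-retry heuristics: they
         * may keep looping while compaction reports progress, which is
         * how no_progress_loops can end up above MAX_RECLAIM_RETRIES.
         */
        if (order > PAGE_ALLOC_COSTLY_ORDER)
                return compaction_progress;

        /* Non-costly orders are capped by the reclaim retry limit. */
        return *no_progress_loops <= MAX_RECLAIM_RETRIES;
}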
> Furthermore, if the allocator has a fatal signal pending, this is not
> considered.
A pending fatal signal is usually not a strong reason to cut the retry
count or fail allocations.
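For the record, honoring a pending SIGKILL would look something like the
hypothetical sketch below. fatal_signal_pending() is the real kernel
helper; it is stubbed here so the snippet stands alone, and the
surrounding function is an illustration of the suggestion, not mainline
behavior:

#include <stdbool.h>

/* Stand-in for the kernel's fatal_signal_pending(current). */
static bool fatal_signal_pending_stub(void) { return false; }

static bool should_retry_with_signal_check(bool would_retry_otherwise)
{
        /*
         * A SIGKILL'ed task could short-circuit the retry loop here,
         * but that task is about to exit and free its memory anyway,
         * so failing its allocation early buys very little.
         */
        if (fatal_signal_pending_stub())
                return false;

        return would_retry_otherwise;
}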
> In my opinion, it might be better to just give up straight away, or to
> try to use the OOM killer only in the non-costly order allocation
> scenario to assist reclaim. Looking at __alloc_pages_may_oom(), the
> current logic is to entirely skip the OOM killer for a costly order
> request, which makes sense.
Well, opinions might differ, of course. The main question is whether
there are workloads that are unhappy with the existing behavior.
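For context, the bail-out mentioned above is a costly-order check in
__alloc_pages_may_oom(); the helper below paraphrases it from memory as
a standalone function, so treat the details as approximate:

#define PAGE_ALLOC_COSTLY_ORDER 3

static bool oom_killer_may_help(unsigned int order)
{
        /*
         * The OOM killer will not help higher-order allocs: killing a
         * task frees pages, but not necessarily contiguous ones.
         */
        if (order > PAGE_ALLOC_COSTLY_ORDER)
                return false;

        return true;
}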
--
Michal Hocko
SUSE Labs