Message-ID: <alpine.DEB.2.21.2003161648370.47327@chino.kir.corp.google.com>
Date: Mon, 16 Mar 2020 16:59:02 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP
systems
On Sat, 14 Mar 2020, Tetsuo Handa wrote:
> > If current thread is
> > an OOM victim, schedule_timeout_killable(1) will give other threads (including
> > the OOM reaper kernel thread) CPU time to run.
>
> If current thread is an OOM victim, schedule_timeout_killable(1) will give other
> threads (including the OOM reaper kernel thread) CPU time to run, by leaving
> the try_charge() path due to should_force_charge() == true and reaching the
> do_exit() path instead of returning to userspace code doing "for (;;);".
>
> Unless the problem is that current thread cannot reach should_force_charge() check,
> schedule_timeout_killable(1) should work.
>
No need to yield if current is the oom victim: allowing the oom reaper to
run when it may not actually be able to free memory is not required.
Yielding increases the likelihood that some other process schedules and is
then unable to yield back because of the memcg oom condition, so the victim
doesn't get a chance to run again.

This happens because the victim is allowed to overcharge, but other
processes attached to an oom memcg hierarchy simply fail the charge. We
are then reliant on every memory charger in the kernel yielding if its
charge fails due to oom; that is the only way to allow the victim to
eventually run.
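
For context, the overcharge bypass I'm referring to, condensed from
mm/memcontrol.c as it stood at the time (abridged to the relevant lines;
a sketch, not the full code):

	static bool should_force_charge(void)
	{
		/* Dying tasks and oom victims may overcharge so they can exit. */
		return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
			(current->flags & PF_EXITING);
	}

	static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
			      unsigned int nr_pages)
	{
		...
		/* The victim bypasses the limit instead of failing the charge. */
		if (unlikely(should_force_charge()))
			goto force;
		...
	force:
		/* Charge past the limit; the memory is freed soon at exit. */
		page_counter_charge(&memcg->memory, nr_pages);
		if (do_memsw_account())
			page_counter_charge(&memcg->memsw, nr_pages);
		css_get_many(&memcg->css, nr_pages);
		return 0;
	}

Every other charger that reaches the oom path simply gets -ENOMEM back,
hence the reliance on those callers yielding.
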
So the only change that I would make to your patch is to do this in
mem_cgroup_out_of_memory() instead:

	if (!fatal_signal_pending(current))
		schedule_timeout_killable(1);

That way we don't rely on all other memory chargers to yield when their
charge fails, and there is no delay for victims themselves.
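
Concretely, the placement would be something like the following sketch
against mem_cgroup_out_of_memory() at the time (abridged; not a final
patch):

	static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg,
					     gfp_t gfp_mask, int order)
	{
		...
		ret = should_force_charge() || out_of_memory(&oc);
		mutex_unlock(&oom_lock);

		/*
		 * Give other threads, in particular the exiting victim and
		 * the oom reaper, a chance to run on UP; victims themselves
		 * must not be delayed on their way out.
		 */
		if (!fatal_signal_pending(current))
			schedule_timeout_killable(1);

		return ret;
	}
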
[ I'll still propose my change that adds cond_resched() to
shrink_node_memcgs() because we can see need_resched set for a
prolonged period of time without scheduling. ]
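
For reference, that separate change amounts to a cond_resched() at the top
of the memcg iteration loop; a sketch against shrink_node_memcgs() in
mm/vmscan.c at the time, with the loop body elided:

	static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
	{
		struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
		struct mem_cgroup *memcg;

		memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
		do {
			/* Yield if need_resched has been set for a while. */
			cond_resched();
			...
		} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
	}
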
If you agree, I'll propose your patch with a changelog indicating that it
can fix the soft lockup issue on UP systems, and it can likely get a
tested-by for it.