Message-ID: <7a6170fc-b247-e327-321a-b99fb53f552d@i-love.sakura.ne.jp>
Date: Wed, 11 Mar 2020 18:34:49 +0900
From: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To: David Rientjes <rientjes@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems
On 2020/03/11 7:55, David Rientjes wrote:
> On Wed, 11 Mar 2020, Tetsuo Handa wrote:
>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -2637,6 +2637,8 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
>>> unsigned long reclaimed;
>>> unsigned long scanned;
>>>
>>> + cond_resched();
>>> +
>>
>> Is this safe in the CONFIG_PREEMPTION case? If the current thread has
>> realtime priority, can we guarantee that the OOM victim (or rather the
>> OOM reaper kernel thread?) gets scheduled?
>>
>
> I think it's the best we can do that immediately solves the issue unless
> you have another idea in mind?
Three ideas: (a) call schedule_timeout_killable(1) outside of oom_lock; (b) have
the OOM reaper grab oom_lock, so that allocating threads guarantee that the OOM
reaper gets scheduled; or (c) do direct OOM reaping, so that allocating threads
guarantee that some memory is reclaimed.
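For idea (a), the shape I have in mind is roughly the following (an untested
sketch, not a patch; the function name try_oom_then_yield and its exact call
site are my own, only oom_lock, out_of_memory() and schedule_timeout_killable()
are real kernel symbols):

```c
/*
 * Sketch of alternative (a): sleep outside of oom_lock so that on a UP
 * system the OOM reaper (or the OOM victim) can run even when the
 * allocating thread has realtime priority and never blocks otherwise.
 */
static bool try_oom_then_yield(struct oom_control *oc)
{
	bool invoked = false;

	if (mutex_trylock(&oom_lock)) {
		out_of_memory(oc);
		mutex_unlock(&oom_lock);
		invoked = true;
	}
	/*
	 * Sleep *after* dropping oom_lock. Sleeping while holding the
	 * lock would keep other allocating threads spinning on
	 * mutex_trylock() and reintroduce the soft lockup this thread
	 * is about.
	 */
	schedule_timeout_killable(1);
	return invoked;
}
```

The point of the sketch is only the ordering: the one-jiffy killable sleep has
to happen after oom_lock is released, unlike a cond_resched() inside the
reclaim loop, which a realtime thread on UP may never yield through.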
>
>>> switch (mem_cgroup_protected(target_memcg, memcg)) {
>>> case MEMCG_PROT_MIN:
>>> /*
>>>
>>