Message-ID: <0825c4b6-377d-f9ef-034d-648cfd675e2c@i-love.sakura.ne.jp>
Date: Fri, 6 Sep 2019 20:11:19 +0900
From: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To: Michal Hocko <mhocko@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] mm, oom: disable dump_tasks by default
On 2019/09/06 20:02, Michal Hocko wrote:
> On Fri 06-09-19 19:46:10, Tetsuo Handa wrote:
>> On 2019/09/05 23:08, Michal Hocko wrote:
>>> On Thu 05-09-19 22:39:47, Tetsuo Handa wrote:
>>> [...]
>>>> There is nothing that prevents users from enabling oom_dump_tasks via sysctl.
>>>> But that requires a solution for the OOM stalling problem.
>>>
>>> You can hardly remove the stalling if you are not reducing the amount of
>>> output or moving it into a different context. Whether the latter is
>>> reasonable is another question, but you are essentially losing the
>>> "at the OOM event state".
>>>
>>
>> I am not losing the "at the OOM event state". Please see "struct oom_task_info"
>> (tentative name), embedded into "struct task_struct", which holds the state
>> captured at the OOM event.
>>
>> And my patch moves the printk() calls of dump_tasks() from OOM context to
>> WQ context.
>
> Workers might be blocked for an unbounded amount of time, and so this
> information might be printed late.
>
Yes, but the OOM reaper will quickly reclaim memory, and once the OOM situation
is resolved, the workqueue can spawn a new worker to process this work even if
the existing workers are blocked. Nonetheless, if your worry turns out to be a
real problem, we can use a dedicated WQ or offload the printing to e.g. the OOM
reaper kernel thread. Anyway, such tuning is beyond the scope of my patch.
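
For reference, below is a minimal sketch of the approach (this is not the actual
patch; the oom_task_info layout, the assumed task_struct field, and the helper
names are all illustrative, and locking/refcounting is elided for brevity):
snapshot each candidate task's numbers while still at the OOM event, then let a
work item print them from WQ context.

#include <linux/cred.h>
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/uidgid.h>
#include <linux/user_namespace.h>
#include <linux/workqueue.h>

/* Hypothetical per-task snapshot; the patch would embed this in task_struct. */
struct oom_task_info {
	pid_t pid;
	uid_t uid;
	char comm[TASK_COMM_LEN];
	unsigned long total_vm;
	unsigned long rss;
	short oom_score_adj;
};

/* Fill @p's snapshot while we are still "at the OOM event" (locking elided). */
static void snapshot_oom_task_info(struct task_struct *p)
{
	struct oom_task_info *info = &p->oom_task_info; /* assumed new field */

	info->pid = p->pid;
	info->uid = from_kuid(&init_user_ns, task_uid(p));
	get_task_comm(info->comm, p);
	info->total_vm = p->mm ? p->mm->total_vm : 0;
	info->rss = p->mm ? get_mm_rss(p->mm) : 0;
	info->oom_score_adj = p->signal->oom_score_adj;
}

/* Runs in WQ context, so a slow console no longer stalls the OOM path. */
static void dump_tasks_workfn(struct work_struct *work)
{
	struct task_struct *p;

	pr_info("[  pid  ]   uid  total_vm      rss oom_score_adj name\n");
	rcu_read_lock();
	for_each_process(p) {
		struct oom_task_info *info = &p->oom_task_info;

		if (!info->pid)	/* no snapshot was taken for this task */
			continue;
		pr_info("[%7d] %5u %8lu %8lu %13hd %s\n",
			info->pid, info->uid, info->total_vm, info->rss,
			info->oom_score_adj, info->comm);
	}
	rcu_read_unlock();
}

static DECLARE_WORK(dump_tasks_work, dump_tasks_workfn);

/* Called from the OOM killer instead of printing the table directly. */
static void queue_dump_tasks(void)
{
	schedule_work(&dump_tasks_work); /* system_wq for simplicity */
}

schedule_work() uses system_wq here just for simplicity; if system_wq
congestion becomes a concern, switching to a dedicated workqueue (e.g. one
created with WQ_MEM_RECLAIM so that a rescuer thread is guaranteed) would be
the kind of tuning mentioned above.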