Message-ID: <8a6659ba-13ba-b9be-08c8-f02f106d55fb@I-love.SAKURA.ne.jp>
Date: Sat, 23 Apr 2022 20:48:28 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: Kent Overstreet <kent.overstreet@...il.com>
Cc: Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org, hch@....de,
hannes@...xchg.org, akpm@...ux-foundation.org,
linux-clk@...r.kernel.org, linux-tegra@...r.kernel.org,
linux-input@...r.kernel.org, rostedt@...dmis.org,
Roman Gushchin <roman.gushchin@...ux.dev>
Subject: Re: [PATCH v2 8/8] mm: Centralize & improve oom reporting in
show_mem.c
On 2022/04/23 10:25, Roman Gushchin wrote:
>>> I agree. However, the OOM killer _has_ to make progress even in such rare
>>> circumstances.
>>
>> Oh, and the concern is allocator recursion? Yeah, that's a good point.
>
> Yes, but not the only problem.
>
>>
>> Do you know if using memalloc_noreclaim_(save|restore) is sufficient for that,
>> or do we want GFP_ATOMIC? I'm already using GFP_ATOMIC for allocations when we
>> generate the report on slabs, since we're taking the slab mutex there.
>
> And this is another problem: grabbing _any_ locks from the oom context is asking
> for trouble: you can potentially enter the oom path doing any allocation, so
> now you have to check that no allocations are ever made holding this lock.
> And I'm not aware of any reasonable way to test it, so most likely it ends up
> introducing some very subtle bugs, which will be triggered once a year.
>

You can't allocate memory or hold locks from OOM context. Since the oom_lock mutex
serializes OOM reporting, you could use a statically pre-allocated buffer to hold
one line of output at a time. Correlating the lines of a whole report can then be
done by a userspace program with the aid of CONFIG_PRINTK_CALLER=y.
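
Something like this (a rough sketch only; oom_print_line() and oom_line_buf are
made-up names, not anything in the patch series) is what I have in mind:

#include <linux/printk.h>
#include <linux/lockdep.h>
#include <linux/oom.h>		/* extern struct mutex oom_lock */
#include <linux/stdarg.h>

/* Reused for every line; protected by oom_lock, so no allocation needed. */
static char oom_line_buf[256];

static void oom_print_line(const char *fmt, ...)
{
	va_list args;

	/* Callers are assumed to already hold oom_lock. */
	lockdep_assert_held(&oom_lock);

	va_start(args, fmt);
	vscnprintf(oom_line_buf, sizeof(oom_line_buf), fmt, args);
	va_end(args);

	/*
	 * Emit exactly one line per printk().  With CONFIG_PRINTK_CALLER=y
	 * each line carries a caller id, so userspace can reassemble one
	 * report even when output from other contexts is interleaved.
	 */
	printk(KERN_WARNING "%s\n", oom_line_buf);
}

Since every caller runs under oom_lock, the single static buffer is never used
concurrently, and nothing in this path allocates memory or takes additional locks.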