[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <534fa596-0c29-0f1e-b292-53ad9c3dbbe3@I-love.SAKURA.ne.jp>
Date: Fri, 15 Jul 2022 07:24:55 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: Jan Kara <jack@...e.cz>, linux-mm@...ck.org
Cc: jack@...e.com, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com,
tytso@....edu, linux-fsdevel@...r.kernel.org
Subject: Re: [syzbot] possible deadlock in start_this_handle (3)
On 2022/07/14 23:18, Jan Kara wrote:
> Hello,
>
> so this lockdep report looks real but is more related to OOM handling than
> to ext4 as such. The immediate problem I can see is that
> mem_cgroup_print_oom_meminfo() which is called under oom_lock calls
> memory_stat_format() which does GFP_KERNEL allocations to allocate buffers
> for dumping of MM statistics. This creates oom_lock -> fs reclaim
> dependency and because OOM can be hit (and thus oom_lock acquired) in
> practically any allocation (regardless of GFP_NOFS) this has a potential of
> creating real deadlock cycles.
>
> So should mem_cgroup_print_oom_meminfo() be using
> memalloc_nofs_save/restore() to avoid such deadlocks? Or perhaps someone
> sees another solution? Generally allocating memory to report OOM looks a
> bit dangerous to me ;).
>
> Honza
I think mem_cgroup_print_oom_meminfo() should use GFP_ATOMIC, for it will fall into
infinite loop if kmalloc(GFP_NOFS) under oom_lock reached __alloc_pages_may_oom() path.
Powered by blists - more mailing lists