Message-ID: <CAOUHufZabH85CeUN-MEMgL8gJGzJEWUrkiM58JkTbBhh-jew0Q@mail.gmail.com>
Date: Sat, 17 Sep 2022 02:24:44 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Liam Howlett <liam.howlett@...cle.com>
Cc: "maple-tree@...ts.infradead.org" <maple-tree@...ts.infradead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v14 00/70] Introducing the Maple Tree
On Thu, Sep 15, 2022 at 12:03 PM Yu Zhao <yuzhao@...gle.com> wrote:
>
> On Sun, Sep 11, 2022 at 6:20 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > On Tue, 6 Sep 2022 19:48:38 +0000 Liam Howlett <liam.howlett@...cle.com> wrote:
> >
> > > Patch series "Introducing the Maple Tree".
> >
> > I haven't seen any issues attributed to maple tree in 2+ weeks. Unless
> > there be weighty objections, I plan to move this series into mm-stable
> > soon after mglru is added. Perhaps a week from now.
>
> Tested-by: Yu Zhao <yuzhao@...gle.com>
>
> stress/fuzzing: arm64, mips64, ppc64 and x86_64
> performance: arm64 (nodejs), mips64 (memcached), ppc64 (specjbb2015)
> and x86_64 (mmtests)
> boot: riscv64
> not covered: m68knommu and s390 (no hardware available)

This should be easy to fix:
======================================================
WARNING: possible circular locking dependency detected
6.0.0-dbg-DEV #1 Tainted: G S O
------------------------------------------------------
stress-ng/21813 is trying to acquire lock:
ffffffff9b043388 (fs_reclaim){+.+.}-{0:0}, at: kmem_cache_alloc_bulk+0x3f/0x460

but task is already holding lock:
ffffa2a509f8d080 (&anon_vma->rwsem){++++}-{3:3}, at: do_brk_flags+0x19d/0x410

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:
-> #1 (&anon_vma->rwsem){++++}-{3:3}:
       down_read+0x3c/0x50
       folio_lock_anon_vma_read+0x147/0x180
       rmap_walk_anon+0x55/0x230
       try_to_unmap+0x65/0xa0
       shrink_folio_list+0x8c5/0x1c70
       evict_folios+0x6af/0xb50
       lru_gen_shrink_lruvec+0x1b6/0x430
       shrink_lruvec+0xa7/0x470
       shrink_node_memcgs+0x116/0x1f0
       shrink_node+0xb4/0x2e0
       balance_pgdat+0x3b9/0x710
       kswapd+0x2b1/0x320
       kthread+0xe5/0x100
       ret_from_fork+0x1f/0x30

-> #0 (fs_reclaim){+.+.}-{0:0}:
       __lock_acquire+0x16f4/0x30c0
       lock_acquire+0xb2/0x190
       fs_reclaim_acquire+0x57/0xd0
       kmem_cache_alloc_bulk+0x3f/0x460
       mas_alloc_nodes+0x148/0x1e0
       mas_nomem+0x45/0x90
       mas_store_gfp+0xf3/0x160
       do_brk_flags+0x1f2/0x410
       __do_sys_brk+0x214/0x3b0
       __x64_sys_brk+0x12/0x20
       do_syscall_64+0x3d/0x80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd
other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&anon_vma->rwsem);
                               lock(fs_reclaim);
                               lock(&anon_vma->rwsem);
  lock(fs_reclaim);

 *** DEADLOCK ***
2 locks held by stress-ng/21813:
 #0: ffffa285087f2a58 (&mm->mmap_lock#2){++++}-{3:3}, at: __do_sys_brk+0x98/0x3b0
 #1: ffffa2a509f8d080 (&anon_vma->rwsem){++++}-{3:3}, at: do_brk_flags+0x19d/0x410
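
The -> #0 chain is mas_store_gfp() allocating maple tree nodes (and so
entering fs_reclaim) while do_brk_flags() already holds anon_vma->rwsem;
the -> #1 chain is reclaim taking anon_vma->rwsem under fs_reclaim. One
way to break the inversion is to preallocate the nodes before the
anon_vma lock is taken and then store with the preallocated nodes under
the lock. Below is a minimal sketch, not the actual patch:
brk_extend_vma_sketch() is a hypothetical helper, and it assumes the
series' mas_preallocate()/mas_store_prealloc() helpers, whose exact
signatures may differ between versions.

/*
 * Sketch only (not the actual fix): extend an existing VMA for brk()
 * without allocating maple tree nodes under anon_vma->rwsem.
 */
static int brk_extend_vma_sketch(struct ma_state *mas,
				 struct vm_area_struct *vma,
				 unsigned long addr, unsigned long len)
{
	mas_set_range(mas, vma->vm_start, addr + len - 1);

	/* Allocate the nodes up front, before any lock is taken, so the
	 * store below never enters fs_reclaim. */
	if (mas_preallocate(mas, vma, GFP_KERNEL))
		return -ENOMEM;

	if (vma->anon_vma) {
		anon_vma_lock_write(vma->anon_vma);
		anon_vma_interval_tree_pre_update_vma(vma);
	}

	vma->vm_end = addr + len;
	/* Store using the preallocated nodes: no allocation here, so no
	 * fs_reclaim acquisition under anon_vma->rwsem. */
	mas_store_prealloc(mas, vma);

	if (vma->anon_vma) {
		anon_vma_interval_tree_post_update_vma(vma);
		anon_vma_unlock_write(vma->anon_vma);
	}
	return 0;
}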