Message-ID: <20230821005610.GA2251898@ik1-406-35019.vs.sakura.ne.jp>
Date: Mon, 21 Aug 2023 09:56:10 +0900
From: Naoya Horiguchi <naoya.horiguchi@...ux.dev>
To: Tong Tiangen <tongtiangen@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Miaohe Lin <linmiaohe@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, wangkefeng.wang@...wei.com,
Guohanjun <guohanjun@...wei.com>
Subject: Re: [RFC PATCH -next] mm: fix softlockup by replacing tasklist_lock
with RCU in for_each_process()
On Fri, Aug 18, 2023 at 05:26:34PM +0800, Tong Tiangen wrote:
>
>
> > On 2023/8/17 13:36, Naoya Horiguchi wrote:
> > On Tue, Aug 15, 2023 at 09:01:54PM +0800, Tong Tiangen wrote:
> > > We found a softlockup issue in our test. After analyzing the logs, we found
> > > that the relevant CPU call traces are as follows:
> > >
> > > CPU0:
> > > _do_fork
> > > -> copy_process()
> > > -> write_lock_irq(&tasklist_lock) //Disable irq, waiting for
> > > //tasklist_lock
> > >
> > > CPU1:
> > > wp_page_copy()
> > > ->pte_offset_map_lock()
> > > -> spin_lock(&page->ptl); //Hold page->ptl
> > > -> ptep_clear_flush()
> > > -> flush_tlb_others() ...
> > > -> smp_call_function_many()
> > > -> arch_send_call_function_ipi_mask()
> > > -> csd_lock_wait() //Waiting for other CPUs respond
> > > //IPI
> > >
> > > CPU2:
> > > collect_procs_anon()
> > > -> read_lock(&tasklist_lock) //Hold tasklist_lock
> > > ->for_each_process(tsk)
> > > -> page_mapped_in_vma()
> > > -> page_vma_mapped_walk()
> > > -> map_pte()
> > > ->spin_lock(&page->ptl) //Waiting for page->ptl
> > >
> > > We can see that CPU1 is waiting for CPU0 to respond to the IPI, CPU0 is
> > > waiting for CPU2 to unlock tasklist_lock, and CPU2 is waiting for CPU1 to
> > > unlock page->ptl. As a result, a softlockup is triggered.
> > >
> > > For collect_procs_anon(), we do not modify the tasklist; we only perform a
> > > read-only traversal. Therefore, we can use the RCU read lock instead of
> > > read-locking tasklist_lock, which breaks the softlockup chain above.
> > >
> > > The same logic can also be applied to:
> > > - collect_procs_file()
> > > - collect_procs_fsdax()
> > > - collect_procs_ksm()
> > > - find_early_kill_thread()
> > >
> > > Signed-off-by: Tong Tiangen <tongtiangen@...wei.com>
> >
> > Hello Tiangen, thank you for finding the issue.
> > mm/filemap.c mentions tasklist_lock in the comment about locking order,
> >
> > * ->i_mmap_rwsem
> > * ->tasklist_lock (memory_failure, collect_procs_ao)
> >
> > so could you update that as well?
> > Otherwise looks good to me.
> >
> > Thanks,
> > Naoya Horiguchi
>
> Thank you for your reply. Since tasklist_lock is no longer used in
> collect_procs_xxx(), should I delete these two comments in mm/filemap.c?
Yes, I think you should.
- Naoya Horiguchi
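
For readers of the archive, the change pattern being discussed is roughly
the following. This is only a minimal sketch of the before/after locking in
collect_procs_anon() and the other collect_procs_*() helpers, not the
actual diff; the loop body is elided and the surrounding code is assumed.

	/* Before: the read-only traversal takes tasklist_lock (a rwlock),
	 * which ties it into the lock/IPI wait chain shown in the changelog. */
	read_lock(&tasklist_lock);
	for_each_process(tsk) {
		/* ... find tasks mapping the poisoned page, add_to_kill() ... */
	}
	read_unlock(&tasklist_lock);

	/* After: the task list is maintained with RCU list primitives
	 * (list_add_tail_rcu()/list_del_rcu()), so a pure read-side walk
	 * with for_each_process() can run under rcu_read_lock() instead,
	 * without taking tasklist_lock at all. */
	rcu_read_lock();
	for_each_process(tsk) {
		/* ... same read-only traversal ... */
	}
	rcu_read_unlock();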