Message-ID: <0bbbb7d8-699b-30ac-9657-840112c41a78@huawei.com>
Date: Tue, 22 Aug 2023 11:41:41 +0800
From: Tong Tiangen <tongtiangen@...wei.com>
To: Matthew Wilcox <willy@...radead.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Miaohe Lin <linmiaohe@...wei.com>,
<wangkefeng.wang@...wei.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm: memory-failure: use rcu lock instead of
tasklist_lock when collect_procs()
On 2023/8/22 2:33, Matthew Wilcox wrote:
> On Mon, Aug 21, 2023 at 05:13:12PM +0800, Tong Tiangen wrote:
>> We found a softlockup issue in our test; analyzing the logs, we found
>> the relevant CPU call traces as follows:
>>
>> CPU0:
>> _do_fork
>> -> copy_process()
>> -> write_lock_irq(&tasklist_lock) //Disable irq, waiting for
>> //tasklist_lock
>>
>> CPU1:
>> wp_page_copy()
>> ->pte_offset_map_lock()
>> -> spin_lock(&page->ptl); //Hold page->ptl
>> -> ptep_clear_flush()
>> -> flush_tlb_others() ...
>> -> smp_call_function_many()
>> -> arch_send_call_function_ipi_mask()
>> -> csd_lock_wait() //Waiting for other CPUs respond
>> //IPI
>>
>> CPU2:
>> collect_procs_anon()
>> -> read_lock(&tasklist_lock) //Hold tasklist_lock
>> ->for_each_process(tsk)
>> -> page_mapped_in_vma()
>> -> page_vma_mapped_walk()
>> -> map_pte()
>> ->spin_lock(&page->ptl) //Waiting for page->ptl
>>
>> We can see that CPU1 is waiting for CPU0 to respond to the IPI, CPU0
>> is waiting for CPU2 to unlock tasklist_lock, and CPU2 is waiting for
>> CPU1 to unlock page->ptl. As a result, a softlockup is triggered.
>>
>> In collect_procs_anon() we do not modify the task list, we only
>> traverse it for reading. Therefore, we can take the RCU read lock
>> instead of read-locking tasklist_lock, which breaks the softlockup
>> chain above.
>
> The only thing that's giving me pause is that there's no discussion
> about why this is safe. "We're not modifying it" isn't really enough
> to justify going from read_lock() to rcu_read_lock(). When you take a
> normal read_lock(), writers are not permitted and so you see an atomic
> snapshot of the list. With rcu_read_lock() you can see inconsistencies.
Hi Matthew,
When rcu_read_lock() is used, the task list can be modified during the
iteration, but those modifications are not seen by the iteration itself;
under the RCU mechanism, updates to the task list become visible to new
readers only after the read-side critical section completes. Therefore,
the task list used by the iteration can also be regarded as a snapshot.
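For reference, the core of the change is essentially the following
minimal sketch in diff form (the exact context in the actual patch may
differ):

    -	read_lock(&tasklist_lock);
    +	rcu_read_lock();
     	for_each_process(tsk) {
     		...
     	}
    -	read_unlock(&tasklist_lock);
    +	rcu_read_unlock();

for_each_process() is already safe to use under rcu_read_lock(), as it
walks the task list through RCU-aware list primitives.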
> For example, if new tasks can be added to the tasklist, they may not
> be seen by an iteration. Is this OK?
A newly added task does not access the HWPoison page, because the
HWPoison page has already been isolated from the buddy allocator
(memory_failure() -> take_page_off_buddy()). Therefore, it is safe
whether a newly added task is seen by the iteration or not.
> Tasks may be removed from the
> tasklist after they have been seen by the iteration. Is this OK?
A task that was seen during the iteration may be deleted from the task
list afterwards, but its task_struct is not released, because a
reference is taken on it in __add_to_kill(). Therefore, the subsequent
processing in kill_procs() (sending signals to tasks, including those
removed from the task list) is not affected, so I think this is safe
too.
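To illustrate (paraphrasing mainline mm/memory-failure.c; field names
and details may differ across versions):

    static void __add_to_kill(struct task_struct *tsk, ...)
    {
    	struct to_kill *tk;

    	tk = kmalloc(sizeof(*tk), GFP_ATOMIC);
    	...
    	get_task_struct(tsk);	/* pin tsk even if it leaves the tasklist */
    	tk->tsk = tsk;
    	list_add_tail(&tk->nd, to_kill);
    }

kill_procs() drops this reference with put_task_struct() after the
signal has been sent, so the task_struct cannot be freed under us in
the meantime.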
>
> As I understand the list RCU code, it guarantees that all tasks which
> were on the list before rcu_read_lock() and remain on the list after
> rcu_read_unlock() will be seen by a list iteration, while tasks which
> are added or removed during that time may or may not be seen.
As described above, my understanding is that write-side updates are not
visible during the RCU read-side critical section.
Thanks,
Tong.