Message-ID: <7061b9f4-b7aa-4dad-858c-53ee186c2d8f@arm.com>
Date: Tue, 5 Mar 2024 09:00:54 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, david@...hat.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, mhocko@...e.com,
shy828301@...il.com, wangkefeng.wang@...wei.com, willy@...radead.org,
xiang@...nel.org, ying.huang@...el.com, yuzhao@...gle.com,
chrisl@...nel.org, surenb@...gle.com, hanchuanhua@...o.com
Subject: Re: [PATCH v3 4/4] mm: swap: Swap-out small-sized THP without
splitting

Hi Barry,

On 18/02/2024 23:40, Barry Song wrote:
> On Tue, Feb 6, 2024 at 1:14 AM Ryan Roberts <ryan.roberts@....com> wrote:
>>
>> On 05/02/2024 09:51, Barry Song wrote:
>>> +Chris, Suren and Chuanhua
>>>
>>> Hi Ryan,
[...]
>>
>
> Hi Ryan,
> I am running into some races, especially with both large folio swap-out and
> swap-in enabled. For some of them, I am still struggling to work out the exact
> timing of how they happen, but the change below removes the bugs that cause
> corrupted data.

I'm getting quite confused with all the emails flying around on this topic. Here
you were reporting a data corruption bug and your suggested fix below is the one
you have now posted at [1]. But in the thread at [1] we concluded that it is not
fixing a functional correctness issue, but is just an optimization in some
corner cases. So does the corruption issue still manifest? Did you manage to
root cause it? Is it a problem with my swap-out series or your swap-in series,
or pre-existing?

[1] https://lore.kernel.org/linux-mm/20240304103757.235352-1-21cnbao@gmail.com/

Thanks,
Ryan
>
> index da2aab219c40..ef9cfbc84760 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1953,6 +1953,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>
>                          if (folio_test_pmd_mappable(folio))
>                                  flags |= TTU_SPLIT_HUGE_PMD;
> +                        /*
> +                         * Make try_to_unmap_one() hold the ptl from the very
> +                         * beginning if we are reclaiming a folio with multiple
> +                         * ptes; otherwise we may only reclaim part of the
> +                         * folio, starting from the middle. For example, a
> +                         * parallel thread might temporarily set a pte to none
> +                         * for various purposes.
> +                         */
> +                        else if (folio_test_large(folio))
> +                                flags |= TTU_SYNC;
>
>                          try_to_unmap(folio, flags);
>                          if (folio_mapped(folio)) {
>
>
> While we are swapping out a large folio, it has many ptes, and we change those
> ptes to swap entries in try_to_unmap_one(). "while (page_vma_mapped_walk(&pvmw))"
> iterates over all ptes within the large folio, but it only begins to acquire the
> ptl once it meets a valid pte, at the line marked /* xxxxxxx */ below:
>
> static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
> {
>         pte_t ptent;
>
>         if (pvmw->flags & PVMW_SYNC) {
>                 /* Use the stricter lookup */
>                 pvmw->pte = pte_offset_map_lock(pvmw->vma->vm_mm, pvmw->pmd,
>                                                 pvmw->address, &pvmw->ptl);
>                 *ptlp = pvmw->ptl;
>                 return !!pvmw->pte;
>         }
>
>         ...
>         pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
>                                           pvmw->address, ptlp);
>         if (!pvmw->pte)
>                 return false;
>
>         ptent = ptep_get(pvmw->pte);
>
>         if (pvmw->flags & PVMW_MIGRATION) {
>                 if (!is_swap_pte(ptent))
>                         return false;
>         } else if (is_swap_pte(ptent)) {
>                 swp_entry_t entry;
>                 ...
>                 entry = pte_to_swp_entry(ptent);
>                 if (!is_device_private_entry(entry) &&
>                     !is_device_exclusive_entry(entry))
>                         return false;
>         } else if (!pte_present(ptent)) {
>                 return false;
>         }
>         pvmw->ptl = *ptlp;
>         spin_lock(pvmw->ptl);   /* xxxxxxx */
>         return true;
> }
>
>
> For various reasons, for example a break-before-make sequence while clearing
> access flags, a pte can transiently be set to none. Since page_vma_mapped_walk()
> doesn't hold the ptl from the beginning, it might only begin to set swap entries
> from the middle of a large folio.
>
> For example, if a large folio has 16 ptes and ptes 0, 1 and 2 happen to be none
> in the intermediate stage of a break-before-make, the ptl will only be taken
> from the 3rd pte, and swap entries will be set from the 3rd pte as well. That
> seems wrong: we are trying to swap out the whole large folio, but we end up
> swapping out only part of it.
>
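> To illustrate the window I mean, here is a minimal, hypothetical sketch (not
> code from the tree; the function name is made up, and the caller is assumed to
> hold the ptl) of a break-before-make style update that clears the pte before
> writing the new value. That transient none pte is what a concurrent lockless
> lookup can observe:
>
> /* Hypothetical illustration only: clear the young bit via break-before-make. */
> static void example_clear_young_bbm(struct vm_area_struct *vma,
>                                     unsigned long addr, pte_t *ptep)
> {
>         struct mm_struct *mm = vma->vm_mm;
>
>         /* Clear the pte first: it is transiently none from here... */
>         pte_t old = ptep_get_and_clear(mm, addr, ptep);
>
>         flush_tlb_page(vma, addr);
>
>         /*
>          * ...until here. A racing page_vma_mapped_walk() without PVMW_SYNC
>          * reads the pte without holding the ptl, sees pte_none() and skips
>          * this pte instead of waiting for the update to finish.
>          */
>         set_pte_at(mm, addr, ptep, pte_mkold(old));
> }
>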
> I am still struggling with the exact timing of the races, but using PVMW_SYNC to
> explicitly take the ptl from the first pte seems a good thing for large folios
> regardless of those races. It prevents try_to_unmap_one() from reading an
> intermediate pte and making the wrong decision, so that a pte-mapped large folio
> is reclaimed atomically, just as a folio with a single pte is.
>
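> For reference, this is roughly (from memory, heavily trimmed) how TTU_SYNC is
> translated into PVMW_SYNC inside try_to_unmap_one() in mm/rmap.c, which is why
> the one-line vmscan change above is enough to get the stricter map_pte() lookup:
>
> static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>                              unsigned long address, void *arg)
> {
>         struct mm_struct *mm = vma->vm_mm;
>         DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
>         enum ttu_flags flags = (enum ttu_flags)(long)arg;
>         ...
>         /* TTU_SYNC selects the stricter, locked lookup in map_pte() */
>         if (flags & TTU_SYNC)
>                 pvmw.flags = PVMW_SYNC;
>         ...
>         while (page_vma_mapped_walk(&pvmw)) {
>                 ...
>         }
>         ...
> }
>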
>> Sorry I haven't progressed this series as I had hoped. I've been concentrating
>> on getting the contpte series upstream. I'm hoping I will find some time to move
>> this series along by the tail end of Feb (hoping to get it in shape for v6.10).
>> Hopefully that doesn't cause you any big problems?
>
> No worries. Anyway, we are already using your code to run various tests.
>
>>
>> Thanks,
>> Ryan
>
> Thanks
> Barry