Message-ID: <87fshj3jrv.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 26 Aug 2022 08:56:04 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Alistair Popple <apopple@...dia.com>
Cc: <linux-mm@...ck.org>, <akpm@...ux-foundation.org>,
Peter Xu <peterx@...hat.com>,
Nadav Amit <nadav.amit@...il.com>,
huang ying <huang.ying.caritas@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
"Sierra Guiza, Alejandro (Alex)" <alex.sierra@....com>,
Felix Kuehling <Felix.Kuehling@....com>,
Jason Gunthorpe <jgg@...dia.com>,
John Hubbard <jhubbard@...dia.com>,
David Hildenbrand <david@...hat.com>,
Ralph Campbell <rcampbell@...dia.com>,
Matthew Wilcox <willy@...radead.org>,
Karol Herbst <kherbst@...hat.com>,
Lyude Paul <lyude@...hat.com>, Ben Skeggs <bskeggs@...hat.com>,
Logan Gunthorpe <logang@...tatee.com>, <paulus@...abs.org>,
<linuxppc-dev@...ts.ozlabs.org>, <stable@...r.kernel.org>
Subject: Re: [PATCH v3 1/3] mm/migrate_device.c: Flush TLB while holding PTL
Alistair Popple <apopple@...dia.com> writes:
> "Huang, Ying" <ying.huang@...el.com> writes:
>
>> Alistair Popple <apopple@...dia.com> writes:
>>
>>> When clearing a PTE, the TLB should be flushed whilst still holding the
>>> PTL to avoid a potential race with madvise/munmap/etc. For example,
>>> consider the following sequence:
>>>
>>> CPU0                          CPU1
>>> ----                          ----
>>>
>>> migrate_vma_collect_pmd()
>>> pte_unmap_unlock()
>>>                               madvise(MADV_DONTNEED)
>>>                               -> zap_pte_range()
>>>                               pte_offset_map_lock()
>>>                               [ PTE not present, TLB not flushed ]
>>>                               pte_unmap_unlock()
>>>                               [ page is still accessible via stale TLB ]
>>> flush_tlb_range()
>>>
>>> In this case the page may still be accessed via the stale TLB entry
>>> after madvise returns. Fix this by flushing the TLB while holding the
>>> PTL.
>>>
>>> Signed-off-by: Alistair Popple <apopple@...dia.com>
>>> Reported-by: Nadav Amit <nadav.amit@...il.com>
>>> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
>>> Cc: stable@...r.kernel.org
>>>
>>> ---
>>>
>>> Changes for v3:
>>>
>>> - New for v3
>>> ---
>>> mm/migrate_device.c | 5 +++--
>>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>> index 27fb37d..6a5ef9f 100644
>>> --- a/mm/migrate_device.c
>>> +++ b/mm/migrate_device.c
>>> @@ -254,13 +254,14 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>  		migrate->dst[migrate->npages] = 0;
>>>  		migrate->src[migrate->npages++] = mpfn;
>>>  	}
>>> -	arch_leave_lazy_mmu_mode();
>>> -	pte_unmap_unlock(ptep - 1, ptl);
>>>
>>>  	/* Only flush the TLB if we actually modified any entries */
>>>  	if (unmapped)
>>>  		flush_tlb_range(walk->vma, start, end);
>>
>> It appears that we need to increment "unmapped" only if ptep_get_and_clear()
>> is used?
>
> In other words you mean we only need to increment unmapped if pte_present
> && !anon_exclusive?
>
> Agree, that's a good optimisation to make. However I'm just trying to
> solve a data corruption issue (not dirtying the page) here, so will post
> that as a separate optimisation patch. Thanks.
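(For reference, the optimisation discussed above would look roughly like the
sketch below. This is only an illustration of the idea, not a posted patch:
since ptep_clear_flush() flushes the TLB itself, only the ptep_get_and_clear()
path leaves a flush pending for the later flush_tlb_range().)

```c
/* Sketch: count only the entries whose TLB flush is actually deferred. */
if (anon_exclusive) {
	pte = ptep_clear_flush(vma, addr, ptep);  /* flushes the TLB itself */
	/* ... */
} else {
	pte = ptep_get_and_clear(mm, addr, ptep);
	unmapped++;  /* only this path still relies on flush_tlb_range() */
}
```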
OK. Then the patch looks good to me. Feel free to add my,
Reviewed-by: "Huang, Ying" <ying.huang@...el.com>
Best Regards,
Huang, Ying
>>
>>> +	arch_leave_lazy_mmu_mode();
>>> +	pte_unmap_unlock(ptep - 1, ptl);
>>> +
>>>  	return 0;
>>>  }
>>>
>>>
>>> base-commit: ffcf9c5700e49c0aee42dcba9a12ba21338e8136