Message-ID: <5e3bebd2-9144-ed51-9e57-36da0a5de3fd@huaweicloud.com>
Date: Thu, 6 Jul 2023 10:37:08 +0800
From: Kemeng Shi <shikemeng@...weicloud.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: remove redundant check in page_vma_mapped_walk
on 7/5/2023 1:05 AM, Andrew Morton wrote:
> On Wed, 5 Jul 2023 05:39:31 +0800 Kemeng Shi <shikemeng@...weicloud.com> wrote:
>
>> For the PVMW_SYNC case, we always take the pte lock when getting the
>> first pte of a PTE-mapped THP in map_pte, and hold it until:
>> 1. the scan of the pmd range finishes, or
>> 2. the scan of the user input range finishes, or
>> 3. the user stops the walk with page_vma_mapped_walk_done.
>> In each case, the pte lock is not dropped in the middle of scanning a
>> PTE-mapped THP, so re-taking it per iteration is redundant.
>>
>> ...
>>
>> --- a/mm/page_vma_mapped.c
>> +++ b/mm/page_vma_mapped.c
>> @@ -275,10 +275,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>> goto restart;
>> }
>> pvmw->pte++;
>> - if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
>> - pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
>> - spin_lock(pvmw->ptl);
>> - }
>> } while (pte_none(*pvmw->pte));
>>
>> if (!pvmw->ptl) {
>
> This code has changed significantly since 6.4. Please develop against
> the mm-unstable branch at
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm, thanks.
>
>
Thanks for the reminder; I will rebase onto mm-unstable and re-check my
changes against the updated code.
--
Best wishes
Kemeng Shi