Message-ID: <87r0gp868d.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 05 Mar 2024 15:53:22 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Barry Song <21cnbao@...il.com>
Cc: David Hildenbrand <david@...hat.com>, Ryan Roberts
<ryan.roberts@....com>, akpm@...ux-foundation.org, linux-mm@...ck.org,
chrisl@...nel.org, yuzhao@...gle.com, hanchuanhua@...o.com,
linux-kernel@...r.kernel.org, willy@...radead.org, xiang@...nel.org,
mhocko@...e.com, shy828301@...il.com, wangkefeng.wang@...wei.com,
Barry Song <v-songbaohua@...o.com>, Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC PATCH] mm: hold PTL from the first PTE while reclaiming a
large folio
Barry Song <21cnbao@...il.com> writes:
> On Tue, Mar 5, 2024 at 10:15 AM David Hildenbrand <david@...hat.com> wrote:
>> > But we did "resolve" those bugs by leaving all PTEs entirely untouched
>> > if we found some PTEs were skipped in try_to_unmap_one [1].
>> >
>> > When we find we only get the PTL from the 2nd or 3rd PTE but not
>> > the 1st, we entirely give up on try_to_unmap_one and leave
>> > all PTEs untouched.
>> >
>> > /* we are not starting from head */
>> > if (!IS_ALIGNED((unsigned long)pvmw.pte, CONT_PTES * sizeof(*pvmw.pte))) {
>> > ret = false;
>> > atomic64_inc(&perf_stat.mapped_walk_start_from_non_head);
>> > set_pte_at(mm, address, pvmw.pte, pteval);
>> > page_vma_mapped_walk_done(&pvmw);
>> > break;
>> > }
>> > This will ensure all PTEs still have a unified state such as CONT-PTE
>> > after try_to_unmap fails.
>> > I feel this could have some false positives because, when racing
>> > with unmap, the 1st PTE might really become pte_none. So explicitly
>> > holding the PTL from the 1st PTE seems a better way.
>>
>> Can we estimate the "cost" of holding the PTL?
>>
>
> This is just moving PTL acquisition one or two PTEs earlier in those corner
> cases. In normal cases, it doesn't affect when the PTL is held.
The mTHP may be mapped at the end of the page table. In that case, the
PTL will be held longer. Or am I missing something?
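
Roughly, the difference is just where the lock is taken (simplified
sketch, not the actual page_vma_mapped_walk() internals; declarations,
error handling and arch details omitted):

	/* today (sketch): skip transient holes before locking */
	pte = pte_offset_map(pmd, address);	/* no PTL yet */
	while (pte_none(ptep_get(pte))) {	/* unlocked, racy read */
		pte++;
		address += PAGE_SIZE;
	}
	ptl = pte_lockptr(mm, pmd);
	spin_lock(ptl);		/* PTL taken at the first present PTE */

	/* proposed (sketch): lock before checking the folio's first PTE */
	pte = pte_offset_map_lock(mm, pmd, address, &ptl);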
--
Best Regards,
Huang, Ying
> In normal cases, page_vma_mapped_walk will find PTE0 is present and thus
> hold the PTL immediately. In corner cases, page_vma_mapped_walk races with
> break-before-make; after skipping one or two PTEs whose states are in
> transition, it will find a present PTE and then acquire the lock.
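
The transient state being skipped there looks roughly like this
(illustrative only, not compilable standalone; pte0/mm/addr are
placeholders rather than actual pvmw fields):

	/* CPU A: break-before-make on the folio's first PTE */
	old = ptep_get_and_clear(mm, addr, pte0); /* PTE0 now reads as none */
	/*
	 * CPU B: pte_none(ptep_get(pte0)) is true in this window, so an
	 * unlocked walk skips PTE0 and only takes the PTL at PTE1/PTE2/...
	 */
	set_pte_at(mm, addr, pte0, newpte);	/* PTE0 present again */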
>
>> --
>> Cheers,
>>
>> David / dhildenb
>
> Thanks
> Barry