Message-ID: <0d20d8af-e480-4eb8-8606-1e486b13fd7e@redhat.com>
Date: Mon, 6 May 2024 10:06:02 +0200
From: David Hildenbrand <david@...hat.com>
To: Barry Song <21cnbao@...il.com>, Ryan Roberts <ryan.roberts@....com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
baolin.wang@...ux.alibaba.com, chrisl@...nel.org, hanchuanhua@...o.com,
hannes@...xchg.org, hughd@...gle.com, kasong@...cent.com,
linux-kernel@...r.kernel.org, surenb@...gle.com, v-songbaohua@...o.com,
willy@...radead.org, xiang@...nel.org, ying.huang@...el.com,
yosryahmed@...gle.com, yuzhao@...gle.com, ziy@...dia.com
Subject: Re: [PATCH v3 3/6] mm: introduce pte_move_swp_offset() helper which
can move offset bidirectionally
On 04.05.24 01:40, Barry Song wrote:
> On Fri, May 3, 2024 at 5:41 PM Ryan Roberts <ryan.roberts@....com> wrote:
>>
>> On 03/05/2024 01:50, Barry Song wrote:
>>> From: Barry Song <v-songbaohua@...o.com>
>>>
>>> There may be a need to obtain the first pte_t from a swap pte_t
>>> located in the middle. For instance, this can occur within
>>> do_swap_page(), where a page fault can happen in any PTE of a large
>>> folio. To address this, this patch introduces pte_move_swp_offset(),
>>> a function capable of moving the offset bidirectionally by a
>>> specified delta argument. Consequently, pte_increment_swp_offset()
>>
>> You mean pte_next_swp_offset()?
>
> yes.
>
>>
>>> will directly invoke it with delta = 1.
>>>
>>> Suggested-by: "Huang, Ying" <ying.huang@...el.com>
>>> Signed-off-by: Barry Song <v-songbaohua@...o.com>
>>> ---
>>> mm/internal.h | 25 +++++++++++++++++++++----
>>> 1 file changed, 21 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index c5552d35d995..cfe4aed66a5c 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -211,18 +211,21 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>> }
>>>
>>> /**
>>> - * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
>>> + * pte_move_swp_offset - Move the swap entry offset field of a swap pte
>>> + * forward or backward by delta
>>> * @pte: The initial pte state; is_swap_pte(pte) must be true and
>>> * non_swap_entry() must be false.
>>> + * @delta: The direction and the offset we are moving; forward if delta
>>> + * is positive; backward if delta is negative
>>> *
>>> - * Increments the swap offset, while maintaining all other fields, including
>>> + * Moves the swap offset, while maintaining all other fields, including
>>> * swap type, and any swp pte bits. The resulting pte is returned.
>>> */
>>> -static inline pte_t pte_next_swp_offset(pte_t pte)
>>> +static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
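(For reference, since the quoted diff is truncated at the signature: a
minimal sketch of what the body plausibly looks like, modeled on the
existing pte_next_swp_offset() that it generalizes; the actual patch
body may differ.)

static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
{
	swp_entry_t entry = pte_to_swp_entry(pte);
	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
						   swp_offset(entry) + delta));

	/* Carry over any swp pte bits the architecture supports. */
	if (pte_swp_soft_dirty(pte))
		new = pte_swp_mksoft_dirty(new);
	if (pte_swp_exclusive(pte))
		new = pte_swp_mkexclusive(new);
	if (pte_swp_uffd_wp(pte))
		new = pte_swp_mkuffd_wp(new);

	return new;
}

A caller like do_swap_page() could then derive the first swap pte of a
large folio from a fault on any of its PTEs, e.g.
pte_move_swp_offset(vmf->orig_pte, -idx), with idx the page's index
within the folio.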
>>
>> We have equivalent functions for pfn:
>>
>> pte_next_pfn()
>> pte_advance_pfn()
>>
>> Although the latter takes an unsigned long and only moves forward currently. I
>> wonder if it makes sense to have their naming and semantics match? i.e. change
>> pte_advance_pfn() to pte_move_pfn() and let it move backwards too.
>>
>> I guess we don't have a need for that and it adds more churn.
>
> we might have a need in the below case:
> A forks B, then A and B share large folios. When B unmaps/exits, the
> large folios of process A become single-mapped.
> Right now, when writing to A's folios, we CoW A's large folios into
> many small folios. I believe we can reuse the entire large folio
> instead of doing nr_pages CoWs and page faults.
> In this case, we might want to get the first PTE from vmf->pte.
Once we have COW reuse for large folios in place (I think you know that
I am working on that), it might make sense to "COW-reuse around",
meaning we check whether some neighboring PTEs map the same large folio
and map them writable as well. But whether it's really worth it, given
the increased page fault latency, is to be decided separately.
>
> Another case might be:
> A forks B, and when we write to either A or B, we might CoW an entire
> large folio instead of CoWing nr_pages small folios.
>
> Case 1 seems more useful; I might have a go in a few days. Then we
> might see pte_move_pfn().
pte_move_pfn() does sound odd to me. It might not be required to
implement the optimization described above (it's easier to simply read
another PTE, check whether it maps the same large folio, and batch from
there).
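A rough sketch of that idea (hypothetical code, not from any posted
patch; vma, ptep, addr, folio and page are assumed to come from the
usual fault-handler context, and real code would also have to keep
first_ptep within the same page table):

	unsigned long idx = folio_page_idx(folio, page);
	pte_t *first_ptep = ptep - idx;
	unsigned long first_addr = addr - idx * PAGE_SIZE;
	pte_t first_pte = ptep_get(first_ptep);

	/* Does that PTE really map the start of the same large folio? */
	if (pte_present(first_pte) &&
	    vm_normal_folio(vma, first_addr, first_pte) == folio) {
		/* Batch the folio's PTEs from here, e.g. via folio_pte_batch(). */
	}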
--
Cheers,
David / dhildenb