Message-ID: <f9eb630a-f0f8-4219-b74f-109c51f31eb4@arm.com>
Date: Tue, 23 Jan 2024 20:43:53 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, Russell King <linux@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Dinh Nguyen <dinguyen@...nel.org>, Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
"Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
Paul Walmsley <paul.walmsley@...ive.com>, Palmer Dabbelt
<palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>, "David S. Miller"
<davem@...emloft.net>, linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org, sparclinux@...r.kernel.org
Subject: Re: [PATCH v1 00/11] mm/memory: optimize fork() with PTE-mapped THP
On 23/01/2024 20:14, David Hildenbrand wrote:
> On 23.01.24 20:43, Ryan Roberts wrote:
>> On 23/01/2024 19:33, David Hildenbrand wrote:
>>> On 23.01.24 20:15, Ryan Roberts wrote:
>>>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>>>> Now that the rmap overhaul[1] is upstream that provides a clean interface
>>>>> for rmap batching, let's implement PTE batching during fork when processing
>>>>> PTE-mapped THPs.
>>>>>
>>>>> This series is partially based on Ryan's previous work[2] to implement
>>>>> cont-pte support on arm64, but it's a complete rewrite based on [1] to
>>>>> optimize all architectures independent of any such PTE bits, and to
>>>>> use the new rmap batching functions that simplify the code and prepare
>>>>> for further rmap accounting changes.
>>>>>
>>>>> We collect consecutive PTEs that map consecutive pages of the same large
>>>>> folio, making sure that the other PTE bits are compatible, and (a) adjust
>>>>> the refcount only once per batch, (b) call rmap handling functions only
>>>>> once per batch and (c) perform batch PTE setting/updates.
>>>>>
>>>>> While this series should be beneficial for adding cont-pte support on
>>>>> ARM64[2], it's one of the requirements for maintaining a total mapcount[3]
>>>>> for large folios with minimal added overhead and further changes[4] that
>>>>> build up on top of the total mapcount.
>>>>
>>>> I'm currently rebasing my contpte work onto this series, and have hit a
>>>> problem.
>>>> I need to expose the "size" of a pte (pte_size()) and skip forward to the start
>>>> of the next (cont)pte every time through the folio_pte_batch() loop. But
>>>> pte_next_pfn() only allows advancing by 1 pfn; I need to advance by nr pfns:
>>>>
>>>>
>>>> static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>>> 		pte_t *start_ptep, pte_t pte, int max_nr, bool *any_writable)
>>>> {
>>>> 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>>>> 	const pte_t *end_ptep = start_ptep + max_nr;
>>>> 	pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte));
>>>> -	pte_t *ptep = start_ptep + 1;
>>>> +	pte_t *ptep = start_ptep;
>>>> +	int vfn, nr, i;
>>>> 	bool writable;
>>>>
>>>> 	if (any_writable)
>>>> 		*any_writable = false;
>>>>
>>>> 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
>>>>
>>>> +	/* Skip forward to the start of the next (cont)pte block. */
>>>> +	vfn = addr >> PAGE_SHIFT;
>>>> +	nr = pte_size(pte);
>>>> +	nr = ALIGN_DOWN(vfn + nr, nr) - vfn;
>>>> +	/* expected_pte has already advanced by 1 pfn; catch it up to nr. */
>>>> +	for (i = 1; i < nr; i++)
>>>> +		expected_pte = pte_next_pfn(expected_pte);
>>>> +	ptep += nr;
>>>> +
>>>> -	while (ptep != end_ptep) {
>>>> +	/* ptep can now step by nr at a time, so don't test for equality. */
>>>> +	while (ptep < end_ptep) {
>>>> +		pte = ptep_get(ptep);
>>>> 		nr = pte_size(pte);
>>>> 		if (any_writable)
>>>> 			writable = !!pte_write(pte);
>>>> 		pte = __pte_batch_clear_ignored(pte);
>>>>
>>>> 		if (!pte_same(pte, expected_pte))
>>>> 			break;
>>>>
>>>> 		/*
>>>> 		 * Stop immediately once we reached the end of the folio. In
>>>> 		 * corner cases the next PFN might fall into a different
>>>> 		 * folio.
>>>> 		 */
>>>> -		if (pte_pfn(pte) == folio_end_pfn)
>>>> +		if (pte_pfn(pte) >= folio_end_pfn)
>>>> 			break;
>>>>
>>>> 		if (any_writable)
>>>> 			*any_writable |= writable;
>>>>
>>>> -		expected_pte = pte_next_pfn(expected_pte);
>>>> -		ptep++;
>>>> +		for (i = 0; i < nr; i++)
>>>> +			expected_pte = pte_next_pfn(expected_pte);
>>>> +		ptep += nr;
>>>> 	}
>>>>
>>>> 	return ptep - start_ptep;
>>>> }
>>>>
>>>>
>>>> So I'm wondering if, instead of enabling pte_next_pfn() for all the arches,
>>>> it's actually better to expose pte_pgprot() for all the arches. Then we
>>>> can be much more flexible about generating ptes with pfn_pte(pfn, pgprot).
>>>>
>>>> What do you think?
>>>
>>> The pte_pgprot() stuff is just nasty IMHO.
>>
>> I dunno; we have pfn_pte() which takes a pfn and a pgprot. It seems reasonable
>> that we should be able to do the reverse.
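
FWIW, the sort of thing I was imagining is roughly the below - purely a
sketch, and it assumes pte_pgprot() exists on every arch, which it doesn't
today:

static inline pte_t pte_advance_pfns(pte_t pte, unsigned long nr)
{
	/* Decompose the pte into pfn and pgprot, advance the pfn, rebuild. */
	return pfn_pte(pte_pfn(pte) + nr, pte_pgprot(pte));
}
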
>
> But pte_pgprot() is only available on a handful of architectures, no? It would
> be nice to have a completely generic pte_next_pfn() / pte_advance_pfns(), though.
>
> Anyhow, this is all "easy" to rework later. Unless I am missing something, the
> low-hanging fruit is simply using PFN_PTE_SHIFT for now, which already exists
> on most archs.
>
>>
>>>
>>> Likely it's best to simply convert pte_next_pfn() to something like
>>> pte_advance_pfns(). Then we could just have
>>>
>>> #define pte_next_pfn(pte) pte_advance_pfns(pte, 1)
>>>
>>> That should be fairly easy to do on top (based on PFN_PTE_SHIFT). And only 3
>>> archs (x86-64, arm64, and powerpc) need slight care to replace a hardcoded "1"
>>> by an integer we pass in.
>>
>> I thought we agreed powerpc was safe to just define PFN_PTE_SHIFT? But, yeah,
>> the principle works. I guess I can do this change along with my series.
>
> It is, if nobody insists on that micro-optimization on powerpc.
>
> If there is good reason to invest more time and effort right now in the
> pte_pgprot approach, then please let me know :)
>
No, I think you're right. I thought pte_pgprot() was implemented by more arches,
but there are 13 without it, so plugging that gap would clearly be a lot of
effort. I'll take the approach you suggest with pte_advance_pfns(). It'll just
require mods to x86 and arm64, +/- ppc.
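
For concreteness, the generic fallback I have in mind is something like the
below - just a sketch along the lines of your PFN_PTE_SHIFT suggestion, not
tested:

#ifndef pte_advance_pfns
static inline pte_t pte_advance_pfns(pte_t pte, unsigned long nr)
{
	/*
	 * Generic fallback: the pfn is encoded at PFN_PTE_SHIFT within the
	 * pte value, so advancing by nr pfns is a plain addition.
	 */
	return __pte(pte_val(pte) + (nr << PFN_PTE_SHIFT));
}
#endif

#define pte_next_pfn(pte)	pte_advance_pfns(pte, 1)

Then x86 and arm64 (and powerpc, if it wants to keep its micro-optimization)
would override pte_advance_pfns() and pass the count through to their existing
helpers rather than hardcoding 1.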