Message-ID: <CAOUHufZypv+kLFu3r8iPYbceBh0KSE=gus-_iC1Q35_QVQdnMQ@mail.gmail.com>
Date: Tue, 4 Jul 2023 20:07:19 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Yin Fengwei <fengwei.yin@...el.com>,
David Hildenbrand <david@...hat.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Yang Shi <shy828301@...il.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v2 3/5] mm: Default implementation of arch_wants_pte_order()
On Tue, Jul 4, 2023 at 7:20 AM Ryan Roberts <ryan.roberts@....com> wrote:
>
> On 03/07/2023 20:50, Yu Zhao wrote:
> > On Mon, Jul 3, 2023 at 7:53 AM Ryan Roberts <ryan.roberts@....com> wrote:
> >>
> >> arch_wants_pte_order() can be overridden by the arch to return the
> >> preferred folio order for pte-mapped memory. This is useful as some
> >> architectures (e.g. arm64) can coalesce TLB entries when the physical
> >> memory is suitably contiguous.
> >>
> >> The first user for this hint will be FLEXIBLE_THP, which aims to
> >> allocate large folios for anonymous memory to reduce page faults and
> >> other per-page operation costs.
> >>
> >> Here we add the default implementation of the function, used when the
> >> architecture does not define it, which returns the order corresponding
> >> to 64K.
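For reference, a minimal sketch of what such a default could look like
(hypothetical, not verbatim from the patch; ilog2() and SZ_64K come from
the generic kernel headers):

  #ifndef arch_wants_pte_order
  static inline int arch_wants_pte_order(struct vm_area_struct *vma)
  {
          /* 64K >> 12 = 16 -> order 4; 64K >> 14 = 4 -> order 2 */
          return ilog2(SZ_64K >> PAGE_SHIFT);
  }
  #endif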
> >
> > I don't really mind a non-zero default value. But people would ask why
> > non-zero and why 64KB. Probably you could argue this is the largest size
> > all known archs support if they have TLB coalescing. For x86, AMD CPUs
> > would want to override this. I'll leave it to Fengwei to decide
> > whether Intel wants a different default value.
> >
> > Also I don't like the vma parameter because it makes
> > arch_wants_pte_order() a mix of hw preference and vma policy. From my
> > POV, the function should be only about the former; the latter should
> > be decided by arch-independent MM code. However, I can live with it if
> > ARM MM people think this is really what you want. ATM, I'm skeptical
> > they do.
>
> Here's the big picture for what I'm trying to achieve:
>
> - In the common case, I'd like all programs to get a performance bump by
> automatically and transparently using large anon folios - so no explicit
> requirement for the process to opt in.
We all agree on this :)
> - On arm64, in the above case, I'd like the preferred folio size to be 64K;
> from the (admittedly limited) testing I've done, that's about where the
> performance knee is, and it doesn't appear to increase memory wastage very
> much. It also has the benefit that for 4K base pages this is the contpte size
> (order-4), so I can take full advantage of contpte mappings transparently to
> the process. And for 16K base pages this is the HPA size (order-2).
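(Concretely: 2^4 contiguous 4K pages = 64K, one contpte block; 2^2
contiguous 16K pages = 64K, one HPA window.)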
My highest priority is to get 16KB proven first because it would
benefit both client and server devices. So my priority may be different
from yours, but I don't see any conflict.
> - On arm64 when the process has marked the VMA for THP (or when
> transparent_hugepage=always) but the VMA does not meet the requirements for a
> PMD-sized mapping (or we failed to allocate, ...) then I'd like to map using
> contpte. For 4K base pages this is 64K (order-4), for 16K this is 2M (order-7)
> and for 64K this is 2M (order-5). The 64K base page case is very important since
> the PMD size for that base page is 512MB, which is almost impossible to allocate
> in practice.
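For reference, a minimal sketch of how an arm64 override could encode
that contpte preference (hypothetical, not code from this series; it
reuses the existing CONT_PTE_SHIFT and PAGE_SHIFT definitions):

  /*
   * Hypothetical arm64 override: prefer the contpte block size, i.e.
   * order-4 (64K) with 4K pages, order-7 (2M) with 16K pages and
   * order-5 (2M) with 64K pages.
   */
  #define arch_wants_pte_order arch_wants_pte_order
  static inline int arch_wants_pte_order(struct vm_area_struct *vma)
  {
          return CONT_PTE_SHIFT - PAGE_SHIFT;
  }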
Which case (server or client) are you focusing on here? For our client
devices, I can confidently say that 64KB has to be after 16KB, if it
happens at all. For servers in general, I don't know of any major
memory-intensive workloads that are not THP-aware, i.e., I don't think
"VMA does not meet the requirements" is a concern.