Message-Id: <20240322134818.9b312f77629f79fcf1564b6f@linux-foundation.org>
Date: Fri, 22 Mar 2024 13:48:18 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: peterx@...hat.com
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, Michael Ellerman <mpe@...erman.id.au>,
Christophe Leroy <christophe.leroy@...roup.eu>, Matthew Wilcox
<willy@...radead.org>, Rik van Riel <riel@...riel.com>, Lorenzo Stoakes
<lstoakes@...il.com>, Axel Rasmussen <axelrasmussen@...gle.com>, Yang Shi
<shy828301@...il.com>, John Hubbard <jhubbard@...dia.com>,
linux-arm-kernel@...ts.infradead.org, "Kirill A . Shutemov"
<kirill@...temov.name>, Andrew Jones <andrew.jones@...ux.dev>, Vlastimil
Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>, Muchun Song
<muchun.song@...ux.dev>, Christoph Hellwig <hch@...radead.org>,
linux-riscv@...ts.infradead.org, James Houghton <jthoughton@...gle.com>,
David Hildenbrand <david@...hat.com>, Jason Gunthorpe <jgg@...dia.com>,
Andrea Arcangeli <aarcange@...hat.com>, "Aneesh Kumar K . V"
<aneesh.kumar@...nel.org>, Mike Kravetz <mike.kravetz@...cle.com>
Subject: Re: [PATCH v3 12/12] mm/gup: Handle hugetlb in the generic
follow_page_mask code
On Thu, 21 Mar 2024 18:08:02 -0400 peterx@...hat.com wrote:
> From: Peter Xu <peterx@...hat.com>
>
> Now follow_page() is ready to handle hugetlb pages in whatever form, on
> all architectures. Switch to the generic code path.
>
> Time to retire hugetlb_follow_page_mask(), following the previous
> retirement of follow_hugetlb_page() in 4849807114b8.
>
> There may be a slight difference in how the loops run when processing slow
> GUP over a large hugetlb range on archs with cont_pte/cont_pmd support:
> with the patch applied, each loop of __get_user_pages() resolves one
> pgtable entry, rather than stepping by the size of the hugetlb hstate,
> which may cover multiple entries in one loop.
>
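For illustration, here is a toy userspace model of that loop accounting;
the gup_iterations() helper and the mask values are made up, but the
page_increm arithmetic mirrors what __get_user_pages() does with
ctx.page_mask:

#include <stdio.h>

#define PAGE_SHIFT	12

/* Count loop iterations needed to cover nr_pages, given the page_mask
 * that each lookup reports back (0 = one pgtable entry per loop). */
static unsigned long gup_iterations(unsigned long start,
				    unsigned long nr_pages,
				    unsigned long page_mask)
{
	unsigned long iters = 0;

	while (nr_pages) {
		/* Pages consumed this round, as in mm/gup.c */
		unsigned long page_increm =
			1 + (~(start >> PAGE_SHIFT) & page_mask);

		if (page_increm > nr_pages)
			page_increm = nr_pages;
		start += page_increm << PAGE_SHIFT;
		nr_pages -= page_increm;
		iters++;
	}
	return iters;
}

int main(void)
{
	unsigned long nr = 512 * 16;	/* 32MB worth of 4K pages */

	/* mask 15: one lookup covers a whole 64K hstate (old hugetlb path) */
	printf("mask=15: %lu iterations\n", gup_iterations(0, nr, 15));
	/* mask 0: one pgtable entry per loop (generic path) */
	printf("mask=0:  %lu iterations\n", gup_iterations(0, nr, 0));
	return 0;
}

So per-entry stepping does 16x the iterations over the same cont-pte
range, which is where the loop overhead measured below comes from.
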
> A quick performance test in an aarch64 VM on an M1 chip shows a 15%
> degradation in a tight loop of slow gup after the path switch. That
> shouldn't be a problem, because slow gup should not be a hot path for GUP
> in general: when the page is present, fast gup will already succeed, while
> when the page is indeed missing and requires a follow-up page fault, the
> slow-gup degradation will probably be buried in the fault paths anyway.
> It also explains why slow gup for THP used to be very slow before
> 57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages != NULL"")
> landed; that speedup came as a side benefit rather than from a dedicated
> performance analysis. If performance becomes a concern, we can consider
> handling CONT_PTE in follow_page().
>
> Until that is justified as necessary, keep everything clean and simple.
>
mm/gup.c:33:21: warning: 'follow_hugepd' declared 'static' but never defined [-Wunused-function]
33 | static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
| ^~~~~~~~~~~~~
--- a/mm/gup.c~mm-gup-handle-hugepd-for-follow_page-fix
+++ a/mm/gup.c
@@ -30,10 +30,12 @@ struct follow_page_context {
unsigned int page_mask;
};
+#ifdef CONFIG_HAVE_FAST_GUP
static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
unsigned long addr, unsigned int pdshift,
unsigned int flags,
struct follow_page_context *ctx);
+#endif
static inline void sanity_check_pinned_pages(struct page **pages,
unsigned long npages)
_
This looks inelegant.
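
A less #ifdef-heavy shape (a hypothetical sketch, not necessarily what
should land) would be to keep the prototype always visible and give the
configs that can never reach it a BUILD_BUG() stub, so that no caller
needs its own guard:

#ifdef CONFIG_HAVE_FAST_GUP
static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
				  unsigned long addr, unsigned int pdshift,
				  unsigned int flags,
				  struct follow_page_context *ctx);
#else
static inline struct page *follow_hugepd(struct vm_area_struct *vma,
					 hugepd_t hugepd, unsigned long addr,
					 unsigned int pdshift,
					 unsigned int flags,
					 struct follow_page_context *ctx)
{
	/* Compile-time error if this ever becomes reachable */
	BUILD_BUG();
	return NULL;
}
#endif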
That's two build issues so far. Please be more expansive in the Kconfig
variations when testing, especially when mucking with pgtable macros.