Message-ID: <20230628215310.73782-1-peterx@redhat.com>
Date: Wed, 28 Jun 2023 17:53:02 -0400
From: Peter Xu <peterx@...hat.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: David Hildenbrand <david@...hat.com>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Mike Rapoport <rppt@...nel.org>,
John Hubbard <jhubbard@...dia.com>,
Matthew Wilcox <willy@...radead.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Yang Shi <shy828301@...il.com>,
James Houghton <jthoughton@...gle.com>,
Jason Gunthorpe <jgg@...dia.com>,
Lorenzo Stoakes <lstoakes@...il.com>,
Hugh Dickins <hughd@...gle.com>, peterx@...hat.com
Subject: [PATCH v4 0/8] mm/gup: Unify hugetlb, speed up thp
v1: https://lore.kernel.org/r/20230613215346.1022773-1-peterx@redhat.com
v2: https://lore.kernel.org/r/20230619231044.112894-1-peterx@redhat.com
v3: https://lore.kernel.org/r/20230623142936.268456-1-peterx@redhat.com
v4:
- Patch 2: check pte write for unsharing [David]
- Added more tags, rebased to akpm/mm-unstable
Hugetlb has a special path for slow gup in which follow_page_mask() is
skipped completely along with faultin_page(). It's not only confusing, but
it also duplicates a lot of logic that generic gup already has, making
hugetlb slightly special.
This patchset tries to dedup the logic: first touch up the slow gup code so
that it can handle hugetlb pages correctly with the current follow page and
faultin routines (we're mostly there already.. ten years ago we did try to
optimize thp, but only got halfway; more below), then in the last patch
drop the special path, so that hugetlb gup will always go through the
generic routines too, via faultin_page().
Note that hugetlb is still special for gup, mostly due to the pgtable
walking (hugetlb_walk()) that we rely on, which is currently per-arch. But
this is still one small step forward, and the diffstat might be proof that
it is worthwhile.
Then for the "speed up thp" side: as a side effect, while looking at this
chunk of code I found that thp support is only partially done. That
doesn't mean thp won't work for gup, but as long as a **pages pointer is
passed in, the optimization is skipped. Patch 6 should address that, so
for thp we now get full-speed gup.
For a quick number, "chrt -f 1 ./gup_test -m 512 -t -L -n 1024 -r 10" goes
from 13992.50us to 378.50us for me. gup_test is an extreme case, but it
shows how much this affects thp gups.
Patches 1-5: prepare for the switch
Patch 6: switch over to the new code and remove the old
Patches 7-8: add a gup test matrix to run_vmtests.sh
Please review, thanks.
Peter Xu (8):
mm/hugetlb: Handle FOLL_DUMP well in follow_page_mask()
mm/hugetlb: Prepare hugetlb_follow_page_mask() for FOLL_PIN
mm/hugetlb: Add page_mask for hugetlb_follow_page_mask()
mm/gup: Cleanup next_page handling
mm/gup: Accelerate thp gup even for "pages != NULL"
mm/gup: Retire follow_hugetlb_page()
selftests/mm: Add -a to run_vmtests.sh
selftests/mm: Add gup test matrix in run_vmtests.sh
fs/userfaultfd.c | 2 +-
include/linux/hugetlb.h | 20 +-
mm/gup.c | 83 ++++---
mm/hugetlb.c | 265 +++-------------------
tools/testing/selftests/mm/run_vmtests.sh | 48 +++-
5 files changed, 126 insertions(+), 292 deletions(-)
--
2.41.0