Message-ID: <CAHS8izNaqqDuVuK_ME_NMHK2XyHqeuBwgJq0DY3s-tDRk05QhA@mail.gmail.com>
Date: Sun, 7 Nov 2021 15:51:57 -0800
From: Mina Almasry <almasrymina@...gle.com>
To: unlisted-recipients:; (no To-header on input)
Cc: David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
"Paul E . McKenney" <paulmckrcu@...com>,
Yu Zhao <yuzhao@...gle.com>, Jonathan Corbet <corbet@....net>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Xu <peterx@...hat.com>,
Ivan Teterevkov <ivan.teterevkov@...anix.com>,
Florian Schmidt <florian.schmidt@...anix.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v3] mm: Add PM_HUGE_THP_MAPPING to /proc/pid/pagemap
On Sun, Nov 7, 2021 at 2:59 PM Mina Almasry <almasrymina@...gle.com> wrote:
>
> Add PM_HUGE_THP_MAPPING to allow userspace to detect whether a given
> virt address is currently mapped by a transparent huge page or not.
>
> An example use case is a process requesting THPs from the kernel (via
> a huge tmpfs mount, for example) for a performance-critical region of
> memory. Userspace may want to query whether the kernel is actually
> backing this memory with hugepages or not.
>
> PM_HUGE_THP_MAPPING bit is set if the virt address is mapped at the PMD
> level and the underlying page is a transparent huge page.
>
> Tested manually by adding logging to transhuge-stress, and by
> allocating THPs and querying the PM_HUGE_THP_MAPPING flag at those
> virtual addresses.
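For anyone who wants to try this out, here is a minimal userspace sketch
(not part of the patch) of how the new bit can be queried. It assumes 4K
base pages, a 2M PMD size, and uses madvise(MADV_HUGEPAGE) on an aligned
anonymous mapping as just one way to get a region that may be THP-backed;
error handling is abbreviated.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PM_HUGE_THP_MAPPING	(1ULL << 58)	/* bit added by this patch */
#define HPAGE_SIZE		(2UL << 20)	/* assumes 2M PMD size */
#define PAGE_SHIFT		12		/* assumes 4K base pages */

int main(void)
{
	uint64_t ent;
	int fd;
	char *p;
	/* Over-allocate so we can pick a PMD-aligned 2M region inside. */
	char *raw = mmap(NULL, 2 * HPAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (raw == MAP_FAILED)
		return 1;
	p = (char *)(((uintptr_t)raw + HPAGE_SIZE - 1) & ~(HPAGE_SIZE - 1));
	madvise(p, HPAGE_SIZE, MADV_HUGEPAGE);
	memset(p, 1, HPAGE_SIZE);	/* fault the region in */

	fd = open("/proc/self/pagemap", O_RDONLY);
	if (fd < 0)
		return 1;
	/* One 8-byte entry per virtual page, indexed by virtual page number. */
	if (pread(fd, &ent, sizeof(ent),
		  ((uintptr_t)p >> PAGE_SHIFT) * sizeof(ent)) != sizeof(ent))
		return 1;
	printf("PMD-mapped THP: %s\n",
	       (ent & PM_HUGE_THP_MAPPING) ? "yes" : "no");
	return 0;
}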
>
> Signed-off-by: Mina Almasry <almasrymina@...gle.com>
>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Matthew Wilcox <willy@...radead.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Paul E. McKenney <paulmckrcu@...com>
> Cc: Yu Zhao <yuzhao@...gle.com>
> Cc: Jonathan Corbet <corbet@....net>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Peter Xu <peterx@...hat.com>
> Cc: Ivan Teterevkov <ivan.teterevkov@...anix.com>
> Cc: Florian Schmidt <florian.schmidt@...anix.com>
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-fsdevel@...r.kernel.org
> Cc: linux-mm@...ck.org
>
> ---
> Documentation/admin-guide/mm/pagemap.rst | 3 ++-
> fs/proc/task_mmu.c | 6 +++++-
> tools/testing/selftests/vm/transhuge-stress.c | 21 +++++++++++++++----
> 3 files changed, 24 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
> index fdc19fbc10839..8a0f0064ff336 100644
> --- a/Documentation/admin-guide/mm/pagemap.rst
> +++ b/Documentation/admin-guide/mm/pagemap.rst
> @@ -23,7 +23,8 @@ There are four components to pagemap:
> * Bit 56 page exclusively mapped (since 4.2)
> * Bit 57 pte is uffd-wp write-protected (since 5.13) (see
> :ref:`Documentation/admin-guide/mm/userfaultfd.rst <userfaultfd>`)
> - * Bits 58-60 zero
> + * Bit 58 page is a huge (PMD size) THP mapping
> + * Bits 59-60 zero
> * Bit 61 page is file-page or shared-anon (since 3.5)
> * Bit 62 page swapped
> * Bit 63 page present
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index ad667dbc96f5c..e10b59064c0b9 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1302,6 +1302,7 @@ struct pagemapread {
> #define PM_SOFT_DIRTY BIT_ULL(55)
> #define PM_MMAP_EXCLUSIVE BIT_ULL(56)
> #define PM_UFFD_WP BIT_ULL(57)
> +#define PM_HUGE_THP_MAPPING BIT_ULL(58)
> #define PM_FILE BIT_ULL(61)
> #define PM_SWAP BIT_ULL(62)
> #define PM_PRESENT BIT_ULL(63)
> @@ -1409,12 +1410,13 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
> struct pagemapread *pm = walk->private;
> spinlock_t *ptl;
> pte_t *pte, *orig_pte;
> + u64 flags = 0;
> int err = 0;
>
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> ptl = pmd_trans_huge_lock(pmdp, vma);
> if (ptl) {
> - u64 flags = 0, frame = 0;
> + u64 frame = 0;
Sorry again, I just noticed that moving 'flags' above is not necessary.
I'll upload v4 with a fix shortly.
> pmd_t pmd = *pmdp;
> struct page *page = NULL;
>
> @@ -1456,6 +1458,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
>
> if (page && page_mapcount(page) == 1)
> flags |= PM_MMAP_EXCLUSIVE;
> + if (page && is_transparent_hugepage(page))
> + flags |= PM_HUGE_THP_MAPPING;
>
> for (; addr != end; addr += PAGE_SIZE) {
> pagemap_entry_t pme = make_pme(frame, flags);
> diff --git a/tools/testing/selftests/vm/transhuge-stress.c b/tools/testing/selftests/vm/transhuge-stress.c
> index fd7f1b4a96f94..7dce18981fff5 100644
> --- a/tools/testing/selftests/vm/transhuge-stress.c
> +++ b/tools/testing/selftests/vm/transhuge-stress.c
> @@ -16,6 +16,12 @@
> #include <string.h>
> #include <sys/mman.h>
>
> +/*
> + * We can use /proc/pid/pagemap to detect whether the kernel was able to find
> + * hugepages or not. This can be very noisy, so it is disabled by default.
> + */
> +#define NO_DETECT_HUGEPAGES
> +
> #define PAGE_SHIFT 12
> #define HPAGE_SHIFT 21
>
> @@ -23,6 +29,7 @@
> #define HPAGE_SIZE (1 << HPAGE_SHIFT)
>
> #define PAGEMAP_PRESENT(ent) (((ent) & (1ull << 63)) != 0)
> +#define PAGEMAP_THP(ent) (((ent) & (1ull << 58)) != 0)
> #define PAGEMAP_PFN(ent) ((ent) & ((1ull << 55) - 1))
>
> int pagemap_fd;
> @@ -47,10 +54,16 @@ int64_t allocate_transhuge(void *ptr)
> (uintptr_t)ptr >> (PAGE_SHIFT - 3)) != sizeof(ent))
> err(2, "read pagemap");
>
> - if (PAGEMAP_PRESENT(ent[0]) && PAGEMAP_PRESENT(ent[1]) &&
> - PAGEMAP_PFN(ent[0]) + 1 == PAGEMAP_PFN(ent[1]) &&
> - !(PAGEMAP_PFN(ent[0]) & ((1 << (HPAGE_SHIFT - PAGE_SHIFT)) - 1)))
> - return PAGEMAP_PFN(ent[0]);
> + if (PAGEMAP_PRESENT(ent[0]) && PAGEMAP_PRESENT(ent[1])) {
> +#ifndef NO_DETECT_HUGEPAGES
> + if (!PAGEMAP_THP(ent[0]))
> + fprintf(stderr, "WARNING: detected non-THP page\n");
> +#endif
> + if (PAGEMAP_PFN(ent[0]) + 1 == PAGEMAP_PFN(ent[1]) &&
> + !(PAGEMAP_PFN(ent[0]) &
> + ((1 << (HPAGE_SHIFT - PAGE_SHIFT)) - 1)))
> + return PAGEMAP_PFN(ent[0]);
> + }
>
> return -1;
> }
> --
> 2.34.0.rc0.344.g81b53c2807-goog