Message-ID: <3a5ed894-e9cd-4433-b82e-be4b049273e1@lucifer.local>
Date: Tue, 29 Jul 2025 16:48:32 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Zhang Qilong <zhangqilong3@...wei.com>
Cc: arnd@...db.de, gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, wangkefeng.wang@...wei.com, sunnanyong@...wei.com
Subject: Re: [PATCH] /dev/zero: try to align PMD_SIZE for private mapping
On Tue, Jul 29, 2025 at 09:49:41PM +0800, Zhang Qilong wrote:
> By default, THP are usually enabled. Mapping /dev/zero with a size
Err... we can't rely on this.
As per my comments on the code below, I'd update this to say something about
falling back if it isn't.
> larger than 2MB could achieve performance gains by allocating a PMD-aligned
> address. The average execution time of libMicro's mprot_tw4m case on arm64:
> - Test case: mprot_tw4m
> - Before the patch: 22 us
> - After the patch: 17 us
>
> Signed-off-by: Zhang Qilong <zhangqilong3@...wei.com>
This looks OK to me, because there's precedent for using
thp_get_unmapped_area() directly as a file_operations->get_unmapped_area
handler, e.g. in ext4.
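For reference, ext4 wires this up in its file_operations roughly as below
(quoting from memory, so treat the neighbouring fields as approximate):

const struct file_operations ext4_file_operations = {
	...
	.mmap			= ext4_file_mmap,
	.get_unmapped_area	= thp_get_unmapped_area,
	...
};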
We also simply (amusingly, or perhaps not hugely amusingly, rather 'uniquely')
establish an anonymous mapping on f_op->mmap via mmap_zero() using
vma_set_anonymous(), so we can rely on the standard anon page memory faulting
logic to sort out the actual allocation/mapping of the huge page via:
__handle_mm_fault() -> create_huge_pmd() -> do_huge_pmd_anonymous_page() etc.
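(mmap_zero() itself is tiny - roughly this, from drivers/char/mem.c:)

static int mmap_zero(struct file *file, struct vm_area_struct *vma)
{
#ifndef CONFIG_MMU
	return -ENOSYS;
#endif
	if (vma->vm_flags & VM_SHARED)
		return shmem_zero_setup(vma);
	/* MAP_PRIVATE: plain anonymous VMA, faults go via the anon/THP path */
	vma_set_anonymous(vma);
	return 0;
}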
So everything should 'just work', and fall back if THP is not permitted.
In general this seems fine.
> ---
> drivers/char/mem.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/char/mem.c b/drivers/char/mem.c
> index 48839958b0b1..c57327ca9dd6 100644
> --- a/drivers/char/mem.c
> +++ b/drivers/char/mem.c
> @@ -515,10 +515,12 @@ static int mmap_zero(struct file *file, struct vm_area_struct *vma)
> static unsigned long get_unmapped_area_zero(struct file *file,
> unsigned long addr, unsigned long len,
> unsigned long pgoff, unsigned long flags)
> {
> #ifdef CONFIG_MMU
> + unsigned long ret;
> +
> if (flags & MAP_SHARED) {
> /*
> * mmap_zero() will call shmem_zero_setup() to create a file,
> * so use shmem's get_unmapped_area in case it can be huge;
> * and pass NULL for file as in mmap.c's get_unmapped_area(),
> @@ -526,10 +528,13 @@ static unsigned long get_unmapped_area_zero(struct file *file,
> */
> return shmem_get_unmapped_area(NULL, addr, len, pgoff, flags);
> }
>
> /* Otherwise flags & MAP_PRIVATE: with no shmem object beneath it */
Let's add a comment here like:
/*
* Attempt to map aligned to huge page size if possible, otherwise we
* fall back to system page size mappings. If THP is not enabled, this
 * returns NULL and we always fall back.
*/
I think it'd be sensible to have an #ifdef CONFIG_TRANSPARENT_HUGEPAGE here,
because thp_get_unmapped_area() does the fallback for you, and otherwise we'd
be trying it twice, which is weird.
E.g.:
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
return thp_get_unmapped_area(file, addr, len, pgoff, flags);
#else
return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
#endif
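(With THP enabled, thp_get_unmapped_area() already ends with roughly the below
- paraphrasing mm/huge_memory.c from memory, exact helpers/signatures vary by
kernel version - so repeating the fallback in the caller buys nothing:)

	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE);
	if (ret)
		return ret;

	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);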
> + ret = thp_get_unmapped_area(file, addr, len, pgoff, flags);
> + if (ret)
> + return ret;
> return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
> #else
> return -ENOSYS;
> #endif
> }
> --
> 2.43.0
>
In _theory_ we should do the thing in mmap() where we check the size is
PMD-aligned (see __get_unmapped_area()), but I don't think anybody's mapping a
bunch of /dev/zero mappings next to each other or using them in any way where
that'd matter... So yeah let's not :)
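(The check I mean is roughly the below, paraphrased from __get_unmapped_area()
in mm/mmap.c - the exact form varies by kernel version:)

	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
		   && !addr /* no hint */
		   && IS_ALIGNED(len, PMD_SIZE)) {
		/* Ensures that larger anonymous mappings are THP aligned. */
		addr = thp_get_unmapped_area_vmflags(file, addr, len,
						     pgoff, flags, vm_flags);
	}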