Message-ID: <58e63c5e-e043-4651-bf6d-9fc52f78cce6@lucifer.local>
Date: Sun, 27 Aug 2023 10:09:21 +0100
From: Lorenzo Stoakes <lstoakes@...il.com>
To: "Joel Fernandes (Google)" <joel@...lfernandes.org>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kselftest@...r.kernel.org, linux-mm@...ck.org,
Shuah Khan <shuah@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>,
Kirill A Shutemov <kirill@...temov.name>,
"Liam R. Howlett" <liam.howlett@...cle.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Kalesh Singh <kaleshsingh@...gle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>
Subject: Re: [PATCH v5 1/7] mm/mremap: Optimize the start addresses in
move_page_tables()
On Tue, Aug 22, 2023 at 01:54:54AM +0000, Joel Fernandes (Google) wrote:
> Recently, we have seen reports [1] of a warning that triggers due to
> move_page_tables() doing a downward and overlapping move on a
> mutually-aligned offset within a PMD. By mutual alignment, I
> mean the source and destination addresses of the mremap are at
> the same offset within a PMD.
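
Side note for anyone skimming: "mutually aligned" just means the offsets
below the PMD boundary match. A purely hypothetical example with 2MB PMDs:

    old_addr = 0x1a03000;    /* 0x3000 into its PMD */
    new_addr = 0x5603000;    /* also 0x3000 into its PMD */
    /* (old_addr & ~PMD_MASK) == (new_addr & ~PMD_MASK) == 0x3000 */

which is exactly the check try_realign_addr() below performs.
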
>
> This mutual alignment along with the fact that the move is downward is
> sufficient to cause a warning related to having an allocated PMD that
> does not have PTEs in it.
>
> This warning will only trigger when there is mutual alignment in the
> move operation. A solution, as suggested by Linus Torvalds [2], is to
> initiate the copy process at the PMD level whenever such alignment is
> present. Implementing this approach will not only prevent the warning
> from being triggered, but it will also optimize the operation as this
> method should enhance the speed of the copy process whenever there's a
> possibility to start copying at the PMD level.
>
> Some more points:
> a. The optimization can be done only when both the source and
> destination of the mremap do not have anything mapped below them up to a
> PMD boundary. I added support to detect that.
>
> b. Point (a) is not a problem for the call to move_page_tables() from exec.c as
> nothing is expected to be mapped below the source. However, for
> non-overlapping mutually aligned moves as triggered by mremap(2), I
> added support for checking such cases.
>
> c. I currently only optimize for PMD moves; in the future, I/we can build
> on this work and do PUD moves as well if there is a need for this. But I
> want to take it one step at a time.
>
> d. We need to be careful about mremap of ranges within the VMA itself.
> For this purpose, I added checks to determine whether the address after
> alignment would fall within the VMA itself.
>
> [1] https://lore.kernel.org/all/ZB2GTBD%2FLWTrkOiO@dhcp22.suse.cz/
> [2] https://lore.kernel.org/all/CAHk-=whd7msp8reJPfeGNyt0LiySMT0egExx3TVZSX3Ok6X=9g@mail.gmail.com/
>
> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> ---
> mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 62 insertions(+)
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 11e06e4ab33b..035fbf542a8f 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -489,6 +489,53 @@ static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
> return moved;
> }
>
> +/*
> + * A helper to check if a previous mapping exists. Required for
> + * move_page_tables() and try_realign_addr() to determine if a previous mapping
> + * exists before we can do realignment optimizations.
> + */
> +static bool can_align_down(struct vm_area_struct *vma, unsigned long addr_to_align,
> + unsigned long mask)
> +{
> + unsigned long addr_masked = addr_to_align & mask;
> +
> + /*
> + * If @addr_to_align of either source or destination is not the beginning
> + * of the corresponding VMA, we can't align down or we will destroy part
> + * of the current mapping.
> + */
> + if (vma->vm_start != addr_to_align)
> + return false;
> +
> + /*
> + * Make sure the realignment doesn't cause the address to fall on an
> + * existing mapping.
> + */
> + return find_vma_intersection(vma->vm_mm, addr_masked, addr_to_align) == NULL;
> +}
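
Small aside: find_vma_intersection(mm, start, end) returns the first VMA
overlapping the half-open range [start, end), so the NULL check above really
does mean "nothing at all is mapped between the rounded-down address and the
start of this VMA". With the hypothetical 0x5603000 from my earlier example
and mask == PMD_MASK, it verifies that [0x5600000, 0x5603000) is empty
before the round-down is allowed.
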
> +
> +/* Opportunistically realign to specified boundary for faster copy. */
> +static void try_realign_addr(unsigned long *old_addr, struct vm_area_struct *old_vma,
> + unsigned long *new_addr, struct vm_area_struct *new_vma,
> + unsigned long mask)
> +{
> + /* Skip if the addresses are already aligned. */
> + if ((*old_addr & ~mask) == 0)
> + return;
> +
> + /* Only realign if the new and old addresses are mutually aligned. */
> + if ((*old_addr & ~mask) != (*new_addr & ~mask))
> + return;
> +
> + /* Ensure realignment doesn't cause overlap with existing mappings. */
> + if (!can_align_down(old_vma, *old_addr, mask) ||
> + !can_align_down(new_vma, *new_addr, mask))
> + return;
> +
> + *old_addr = *old_addr & mask;
> + *new_addr = *new_addr & mask;
> +}
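
To make the net effect concrete (illustrative only, reusing the hypothetical
addresses from above and assuming both VMAs start exactly at those addresses
with nothing mapped in the 0x3000 below them):

    /* before: old_addr == 0x1a03000, new_addr == 0x5603000 */
    try_realign_addr(&old_addr, old_vma, &new_addr, new_vma, PMD_MASK);
    /* after:  old_addr == 0x1a00000, new_addr == 0x5600000, so the main
     * loop in move_page_tables() can begin with whole-PMD moves */
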
> +
> unsigned long move_page_tables(struct vm_area_struct *vma,
> unsigned long old_addr, struct vm_area_struct *new_vma,
> unsigned long new_addr, unsigned long len,
> @@ -508,6 +555,14 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> return move_hugetlb_page_tables(vma, new_vma, old_addr,
> new_addr, len);
>
> + /*
> + * If possible, realign addresses to PMD boundary for faster copy.
> + * Only realign if the mremap copying hits a PMD boundary.
> + */
> + if ((vma != new_vma)
> + && (len >= PMD_SIZE - (old_addr & ~PMD_MASK)))
> + try_realign_addr(&old_addr, vma, &new_addr, new_vma, PMD_MASK);
> +
> flush_cache_range(vma, old_addr, old_end);
> mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma->vm_mm,
> old_addr, old_end);
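
Worth spelling out: PMD_SIZE - (old_addr & ~PMD_MASK) is the distance from
old_addr to the end of its PMD, so realignment is only attempted when the
copy reaches at least that far. With the hypothetical 0x3000 offset from
above and 2MB PMDs:

    PMD_SIZE - (old_addr & ~PMD_MASK) == 0x200000 - 0x3000 == 0x1fd000

i.e. len must be at least 0x1fd000 for the round-down to yield a whole-PMD
move; shorter copies are left untouched.
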
> @@ -577,6 +632,13 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>
> mmu_notifier_invalidate_range_end(&range);
>
> + /*
> + * Prevent negative return values when {old,new}_addr was realigned
> + * but we broke out of the above loop for the first PMD itself.
> + */
> + if (len + old_addr < old_end)
> + return 0;
> +
> return len + old_addr - old_end; /* how much done */
> }
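
The need for that last check is easiest to see with numbers (hypothetical
again): old_end is computed from the original, un-realigned old_addr, so
with old_addr == 0x1a03000 and len == 0x200000 we get old_end == 0x1c03000.
After realignment old_addr becomes 0x1a00000; if the loop then bails out
before making progress, len + old_addr == 0x1c00000 < old_end, and the
unsigned "len + old_addr - old_end" would otherwise wrap to a huge bogus
"bytes moved" value. Returning 0 keeps the caller's accounting sane.
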
>
> --
> 2.42.0.rc1.204.g551eb34607-goog
>
Looks good to me! Thanks for the changes :)
Reviewed-by: Lorenzo Stoakes <lstoakes@...il.com>