Open Source and information security mailing list archives
 
Message-ID: <f9b7c816-b0a4-4e9c-a5c6-0cc5990e56d2@lucifer.local>
Date: Fri, 5 Jul 2024 18:02:37 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: "Liam R. Howlett" <Liam.Howlett@...cle.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Vlastimil Babka <vbabka@...e.cz>, Matthew Wilcox <willy@...radead.org>,
        sidhartha.kumar@...cle.com, "Paul E . McKenney" <paulmck@...nel.org>,
        Bert Karwatzki <spasswolf@....de>, Jiri Olsa <olsajiri@...il.com>,
        linux-kernel@...r.kernel.org, Kees Cook <kees@...nel.org>
Subject: Re: [PATCH v3 02/16] mm/mmap: Introduce abort_munmap_vmas()

On Thu, Jul 04, 2024 at 02:27:04PM GMT, Liam R. Howlett wrote:
> Extract clean up of failed munmap() operations from
> do_vmi_align_munmap().  This simplifies later patches in the series.
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> ---
>  mm/mmap.c | 25 ++++++++++++++++++++-----
>  1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 28a46d9ddde0..d572e1ff8255 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2586,6 +2586,25 @@ struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
>  			 vma->vm_userfaultfd_ctx, anon_vma_name(vma));
>  }
>
> +/*
> + * abort_munmap_vmas - Undo any munmap work and free resources
> + *
> + * Reattach detached vmas, free up maple tree used to track the vmas.
> + */
> +static inline void abort_munmap_vmas(struct ma_state *mas_detach)
> +{
> +	struct vm_area_struct *vma;
> +	int limit;
> +
> +	limit = mas_detach->index;

This is actually a change to existing behaviour, albeit a sensible one: rather
than walking the tree start-to-end, you only walk up to the point it has been
populated (assuming I'm not missing anything, it looks to me like
mas_for_each() is _inclusive_ on max).

Maybe worth mentioning in the commit message?

> +	mas_set(mas_detach, 0);
> +	/* Re-attach any detached VMAs */
> +	mas_for_each(mas_detach, vma, limit)
> +		vma_mark_detached(vma, false);
> +
> +	__mt_destroy(mas_detach->tree);
> +}
> +
>  /*
>   * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
>   * @vmi: The vma iterator
> @@ -2740,11 +2759,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  userfaultfd_error:
>  munmap_gather_failed:
>  end_split_failed:
> -	mas_set(&mas_detach, 0);
> -	mas_for_each(&mas_detach, next, end)
> -		vma_mark_detached(next, false);
> -
> -	__mt_destroy(&mt_detach);
> +	abort_munmap_vmas(&mas_detach);
>  start_split_failed:
>  map_count_exceeded:
>  	validate_mm(mm);
> --
> 2.43.0
>

This looks fine though, feel free to add:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
