Message-ID: <c0ef6b6a-1c9b-4da2-a180-c8e1c73b1c28@lucifer.local>
Date: Tue, 27 Aug 2024 12:41:00 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>
Cc: "Liam R . Howlett" <Liam.Howlett@...cle.com>,
        Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH v2 06/10] mm: avoid using vma_merge() for new VMAs

On Fri, Aug 23, 2024 at 09:07:01PM GMT, Lorenzo Stoakes wrote:

[snip]

>  void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb)
> @@ -1426,9 +1536,10 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>  	struct vm_area_struct *vma = *vmap;
>  	unsigned long vma_start = vma->vm_start;
>  	struct mm_struct *mm = vma->vm_mm;
> -	struct vm_area_struct *new_vma, *prev;
> +	struct vm_area_struct *new_vma;
>  	bool faulted_in_anon_vma = true;
>  	VMA_ITERATOR(vmi, mm, addr);
> +	VMG_VMA_STATE(vmg, &vmi, NULL, vma, addr, addr + len);
>
>  	/*
>  	 * If anonymous vma has not yet been faulted, update new pgoff
> @@ -1439,11 +1550,18 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>  		faulted_in_anon_vma = false;
>  	}
>
> -	new_vma = find_vma_prev(mm, addr, &prev);
> +	new_vma = find_vma_prev(mm, addr, &vmg.prev);
>  	if (new_vma && new_vma->vm_start < addr + len)
>  		return NULL;	/* should never get here */
>
> -	new_vma = vma_merge_new_vma(&vmi, prev, vma, addr, addr + len, pgoff);
> +	vmg.vma = NULL; /* New VMA range. */
> +	vmg.pgoff = pgoff;
> +	vmg.next = vma_next(&vmi);
> +	vma_prev(&vmi);
> +	vma_iter_next_range(&vmi);
> +
> +	new_vma = vma_merge_new_range(&vmg);
> +
>  	if (new_vma) {
>  		/*
>  		 * Source vma may have been merged into new_vma

[snip]

Hi Andrew - could you squash the attached fix-patch into this please? There
is an issue with a CONFIG_DEBUG_VM check firing when copy_vma() unnecessarily
moves the VMA iterator, as reported at [0].

Thanks!

[0]: https://lore.kernel.org/linux-mm/202408271452.c842a71d-lkp@intel.com/
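
To illustrate the failure mode, here is a toy userspace sketch (not kernel
code - toy_iter, toy_prev and toy_next_range are made-up names standing in
for the VMA iterator). It only demonstrates the guard used in the fix
below: step forward again only if stepping back actually found something.

#include <stdio.h>

struct toy_range { unsigned long start, end; };

struct toy_iter {
	const struct toy_range *ranges;
	int nr;
	int pos;	/* index of current range; -1 means "before the first" */
};

/* Step back to the previous range; return it, or NULL if there is none. */
static const struct toy_range *toy_prev(struct toy_iter *it)
{
	if (it->pos <= 0) {
		it->pos = -1;
		return NULL;
	}
	return &it->ranges[--it->pos];
}

/* Step forward one range (clamped at the last one). */
static void toy_next_range(struct toy_iter *it)
{
	if (it->pos < it->nr - 1)
		it->pos++;
}

int main(void)
{
	/* One existing range; the new range is being placed below it, so
	 * there is nothing prior to us. */
	const struct toy_range ranges[] = { { 0x3000, 0x4000 } };
	struct toy_iter it = { ranges, 1, 0 };

	/* Unconditional prev + next: with no previous range we fall back to
	 * "before everything" and then step straight onto the existing
	 * range, i.e. past the gap we meant to sit at or before. */
	toy_prev(&it);
	toy_next_range(&it);
	printf("unconditional: pos = %d (on the existing range)\n", it.pos);

	/* Guarded, as in the fix: only step forward again if stepping back
	 * actually found a previous range. */
	it.pos = 0;
	if (toy_prev(&it))
		toy_next_range(&it);
	printf("guarded:       pos = %d (still before the gap)\n", it.pos);

	return 0;
}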

----8<----
>From 53b41cc9ddfaf30f8a037f466686d942e0e64943 Mon Sep 17 00:00:00 2001
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Date: Tue, 27 Aug 2024 11:59:27 +0100
Subject: [PATCH] mm: only advance iterator if prev exists

If we have no VMAs prior to us, such as the case where we are mremap()'ing
a VMA backwards, then we will advance the iterator backwards to 0 before
moving to the original range again.

The intent is to position the iterator at or before the gap, so we must
avoid this. This is simply addressed by only advancing the iterator if
vma_prev() yields a result.

Reported-by: kernel test robot <oliver.sang@...el.com>
Closes: https://lore.kernel.org/oe-lkp/202408271452.c842a71d-lkp@intel.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>

---
 mm/vma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 8a5fa15f46a2..7d948edbbb9e 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1557,8 +1557,8 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 	vmg.vma = NULL; /* New VMA range. */
 	vmg.pgoff = pgoff;
 	vmg.next = vma_next(&vmi);
-	vma_prev(&vmi);
-	vma_iter_next_range(&vmi);
+	if (vma_prev(&vmi))
+		vma_iter_next_range(&vmi);

 	new_vma = vma_merge_new_range(&vmg);

--
2.46.0
