Message-Id: <20200831015121.20036-1-richard.weiyang@linux.alibaba.com>
Date:   Mon, 31 Aug 2020 09:51:21 +0800
From:   Wei Yang <richard.weiyang@...ux.alibaba.com>
To:     viro@...iv.linux.org.uk, akpm@...ux-foundation.org
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        Wei Yang <richard.weiyang@...ux.alibaba.com>
Subject: [PATCH] [RFC] exec: the vma passed to shift_arg_pages() must not have next

Calls to vma_adjust() fall into two categories based on whether *insert*
is NULL or not. When *insert* is NULL, there are two users: mremap() and
shift_arg_pages().
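
For reference, vma_adjust() is a thin wrapper around __vma_adjust()
(quoted roughly from include/linux/mm.h of this era, for context only):

	static inline int vma_adjust(struct vm_area_struct *vma,
		unsigned long start, unsigned long end, pgoff_t pgoff,
		struct vm_area_struct *insert)
	{
		return __vma_adjust(vma, start, end, pgoff, insert, NULL);
	}

so whether *insert* is NULL is decided entirely by the caller.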

For the second vma_adjust() call in shift_arg_pages(), the vma must not
have a next vma. Otherwise vma_adjust() would expand next->vm_start
instead of just shifting the vma.

Fortunately, shift_arg_pages() is only used by setup_arg_pages() to move
the stack, which is placed at the top of the address range. This means the
vma is not expected to have a next vma.
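
The two vma_adjust() calls in shift_arg_pages() look roughly like this
(paraphrased from fs/exec.c; the intermediate steps are elided):

	/* cover the whole range: [new_start, old_end) */
	if (vma_adjust(vma, new_start, old_end, vma->vm_pgoff, NULL))
		return -ENOMEM;

	/* ... move the page tables, free the pages in the old tail ... */

	/* Shrink the vma to just the new range.  Always succeeds. */
	vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);

It is the second, shrinking call that would silently turn into "expand
next" if a next vma existed.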

Since mremap() calls vma_adjust() to expand the vma itself,
shift_arg_pages() is the only caller that may fall into mprotect case 4 by
accident. Add a BUG_ON() and a comment to inform future readers.
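
For context, "mprotect case 4" refers to the branch of __vma_adjust() in
mm/mmap.c that handles a shrinking vma when *insert* is NULL (abbreviated
and paraphrased here; the field updates are omitted):

	if (next && !insert) {
		...
		} else if (end < vma->vm_end) {
			/*
			 * vma shrinks, and !insert tells it's not
			 * split_vma inserting another: so it must be
			 * mprotect case 4 shifting the boundary down.
			 */
			exporter = vma;
			importer = next;
		}
		...
	}

With vma->vm_next == NULL, next is NULL and this whole block is skipped,
which is exactly what the new BUG_ON() asserts.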

Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
---
 fs/exec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/exec.c b/fs/exec.c
index a91003e28eaa..3ff44ab0d112 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -682,6 +682,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 	struct mmu_gather tlb;
 
 	BUG_ON(new_start > new_end);
+	BUG_ON(vma->vm_next);
 
 	/*
 	 * ensure there are no vmas between where we want to go
@@ -726,6 +727,8 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 
 	/*
 	 * Shrink the vma to just the new range.  Always succeeds.
+	 * Since vma->vm_next is NULL, __vma_adjust() will not fall into
+	 * mprotect case 4 and expand next.
 	 */
 	vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);
 
-- 
2.20.1 (Apple Git-117)
